forum_id: stringlengths (min 9, max 20)
forum_title: stringlengths (min 3, max 179)
forum_authors: sequencelengths (min 0, max 82)
forum_abstract: stringlengths (min 1, max 3.52k)
forum_keywords: sequencelengths (min 1, max 29)
forum_decision: stringclasses (22 values)
forum_pdf_url: stringlengths (min 39, max 50)
forum_url: stringlengths (min 41, max 52)
venue: stringclasses (46 values)
year: stringdate (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
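The `reviews` column stores OpenReview notes as parallel lists, with each review or comment serialized as a JSON string in a `structured_content_str` field. A minimal decoding sketch using only the standard library; the `record` dict below is a hypothetical, heavily truncated stand-in for a real row, not actual dataset content:

```python
import json

# Hypothetical, truncated stand-in for one entry of the `reviews` column:
# parallel lists, with each note serialized as a JSON string.
record = {
    "note_id": ["z2p6J9U8cC"],
    "note_type": ["official_comment"],
    "structured_content_str": [
        '{"title": "Rebuttal", "comment": "We thank the reviewer."}'
    ],
}

# Decode every serialized note into a plain dict for downstream analysis.
notes = [json.loads(s) for s in record["structured_content_str"]]
print(notes[0]["title"])  # -> Rebuttal
```

Real rows carry longer lists (one entry per note, as in the example record below), but the decoding step is the same.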
4wmf3Ffhl2
A Dynamic Model of Performative Human-ML Collaboration: Theory and Empirical Evidence
[ "Tom Sühr", "Samira Samadi", "Chiara Farronato" ]
Machine learning (ML) models are increasingly used in various applications, from recommendation systems in e-commerce to diagnosis prediction in healthcare. In this paper, we present a novel dynamic framework for thinking about the deployment of ML models in a performative, human-ML collaborative system. In our framework, the introduction of ML recommendations changes the data-generating process of human decisions, which are only a proxy to the ground truth and which are then used to train future versions of the model. We show that this dynamic process in principle can converge to different stable points, i.e. where the ML model and the Human+ML system have the same performance. Some of these stable points are suboptimal with respect to the actual ground truth. As a proof of concept, we conduct an empirical user study with 1,408 participants. In the study, humans solve instances of the knapsack problem with the help of machine learning predictions of varying performance. This is an ideal setting because we can identify the actual ground truth, and evaluate the performance of human decisions supported by ML recommendations. We find that for many levels of ML performance, humans can improve upon the ML predictions. We also find that the improvement could be even higher if humans rationally followed the ML recommendations. Finally, we test whether monetary incentives can increase the quality of human decisions, but we fail to find any positive effect. Using our empirical data to approximate our collaborative system suggests that the learning process would dynamically reach an equilibrium performance that is around 92% of the maximum knapsack value. Our results have practical implications for the deployment of ML models in contexts where human decisions may deviate from the indisputable ground truth.
[ "Human-AI Collaboration", "Human-Computer Interaction", "Dynamic Systems", "performative prediction", "strategic behavior", "human-in-the-loop", "dynamic learning", "deployment strategies" ]
Reject
https://openreview.net/pdf?id=4wmf3Ffhl2
https://openreview.net/forum?id=4wmf3Ffhl2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z2p6J9U8cC", "xWXx55ayzB", "x9VaK63AF1", "vHQJcc5Y5t", "dfRxeifoWN", "UBNkOYbznz", "AzayiHOXsf", "8UsDtKSwqh", "2vGYYEGOWy", "2gj9v1InW6", "2erjaypLW6", "1K4fpbtbSU" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732288016266, 1733783345563, 1730636548932, 1732288658208, 1732288278773, 1730083194529, 1730673543059, 1730632504479, 1732288168369, 1732288099040, 1732288457834, 1737524066881 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Submission10634/Area_Chair_kbuF" ], [ "ICLR.cc/2025/Conference/Submission10634/Reviewer_Zknp" ], [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Submission10634/Reviewer_t43N" ], [ "ICLR.cc/2025/Conference/Submission10634/Reviewer_kfdp" ], [ "ICLR.cc/2025/Conference/Submission10634/Reviewer_DvUe" ], [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Submission10634/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal of weaknesses\", \"comment\": \"We thank the reviewer for their extensive summary, comments, and questions. The reviewer is right that our empirical study is just an approximation of the performative prediction. We take care in calling it *\\u201ca proof of concept\\u201d* in the abstract and many other parts of the paper. 
In the contributions, we have adjusted our wording to further emphasize this: *\\u201cAs a proof of concept for our theory, we provide some empirical insights in a context where humans solve knapsack problems with the help of machine learning predictions.\\u201d* We emphasize it as a limitation in the conclusions (second-to-last paragraph). Yet, there are several reasons for this simplification: a simulation of the iterative process, which is easier to implement, would have lacked real human feedback; a study with the full iterative process (where we train an algorithm, deploy it, collect data from humans, train again, and so on) was too costly to implement, and risked suffering from changes in the composition of study participants solving the problems.\\n\\nThe advantage of our simplification is that it provides a first approximation to a phenomenon that we think is very important and (with generative AI) will become increasingly prevalent: subsequent model deployments affect the data-generating process of data inputs to future models in ways that can deviate from the undisputable ground truth. We hope this is enough of a contribution, while acknowledging the limitations that the referee correctly points out. For example, in the conclusions we highlight that *\\u201cWe see this as a first proof of concept of collaborative characteristic functions, but much more work is needed to estimate these functions in real-world settings.\\u201d*\\n\\nWe do not see our assumption that utility trajectories are determined solely by population-wide average utility as too strict. In practice, assuming that the firm's learning process perfectly learns the human-ML solution in the previous iteration allows us to move horizontally from the collaborative characteristic function to the 45-degree line (Figure 1). Deviations from this assumption would imply that the mapping from the human-ML performance to the next ML performance need not lie on the 45-degree line.
As long as this mapping is monotonically increasing (higher human-ML performance leads to higher ML performance in the next round), our insights remain. \\n\\nWe also agree that the 0-1 knapsack problem is not a standard learning task. However, the knapsack problem has many desirable properties for the empirical investigation of human-ML collaboration: participants require little training to be able to solve it (making recruitment of study participants affordable), the optimal solution is not obvious to humans but easy for us to calculate (so we can compare solutions to the undisputable ground truth), the optimal knapsack value is unique and unambiguous, even if there may exist more than one optimal solution (much harder to say for images and text), and we can generate instances at almost zero cost. While we aim to design user studies with more common learning tasks for future work, we hope that the benefits outweigh the disadvantages, as a first empirical setting.\"}", "{\"metareview\": \"The paper presents a dynamic framework for performative human-ML collaboration, modeling how human decisions influenced by ML predictions can alter the data-generating process. The authors propose a utility-based theoretical model using collaborative characteristic functions to describe human-ML interactions. Empirical evaluations involve participants solving combinatorial knapsack problems with varying levels of ML assistance.\\n\\nThe main strengths raised by the reviewers include the problem's relevance and the combination of theoretical modeling and real-world user experiments. However, several concerns were raised, including limited generalizability due to the focus on the knapsack problem and reliance on synthetic data. Overall, the paper would benefit from another round of revisions and review. 
We hope the authors find the reviewer comments helpful.\", \"additional_comments_on_reviewer_discussion\": \"There is a consensus among the reviewers.\"}", "{\"summary\": \"This paper studies a dynamic model of performative human-ML collaboration from both theoretical and empirical perspectives. The paper introduces the notion of the Collaborative Characteristic Function, which connects the predicted label and the unknown ground truth. The paper conducts an empirical study that involves real humans on knapsack problems. Experimental results show that humans tend to improve the model's performance, and humans may submit worse results than the models' predictions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality: A substantive assessment of the strengths of the paper, touching on each of the following dimensions: originality, quality, clarity, and significance.\", \"quality\": \"The paper is supported by a robust empirical study involving 1,408 participants working on the knapsack problem. The statistical analyses performed provide strong support for the conclusions drawn, particularly regarding human improvements over ML predictions. Additionally, the paper critically examines the impact of monetary incentives on decision quality, contributing valuable insights to the field.\", \"clarity\": \"The paper's motivation and conclusion are clear.\", \"significance\": \"The paper gives some suggestions about the consideration of human behavior and the selection of the dataset to train the model.\", \"weaknesses\": \"1. Abstractness of Problem Domains: The study does not focus on specific classification or regression tasks, which makes the findings somewhat abstract. A more concrete application would enhance the practical relevance of the research.\\n2. Limited Application Scope: The research primarily concentrates on the knapsack problem, neglecting more realistic scenarios, such as medical diagnosis. 
Exploring applications in critical areas like healthcare would significantly increase the paper's impact and relevance.\\n3. Participant Preference Variability: While the study involves 1,408 participants, it lacks a detailed analysis of their preference differences. Understanding how individual preferences might affect decision-making is essential, as these variations could lead to suboptimal choices in certain instances.\\n4. Simulation of Human Behavior: Beyond conducting real experiments with participants, the paper does not explore the potential for simulating human behavior. Employing simulations could reduce the costs associated with extensive human experimentation while still providing valuable insights into collaborative decision-making dynamics.\", \"questions\": \"Can the authors open-source the dataset provided by real humans?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
We will denote $\\delta_{M_t}^X$ as the delta determined by $M_t$ and $X$, and $\\delta_{M_t}$ as the expected delta over all $X$ and all $H$.\\nThank you for pointing out the missing $X$ in Definition 3. We changed that in our revised version of the paper along with your other suggestions.\\n\\n3) We think there is a misunderstanding of our notation in the proof of Proposition 1. The expression $L(Y_{M_{t+1}}, Y_{H_t})=0$ is an annotation to the equality. The loss is 0 because of our perfect learning assumption. This is not a statement about $E_{x \\in \\mathcal{X}}(U(Y_{M_{t+1}}))$, i.e., it is not assumed to be equal to zero. Observation 2 follows from the definition of a utility function (Definition 1), which requires that utility is a proximity measure. However, we take your comment as an opportunity to describe the properties used for the proof of Proposition 1 in more detail in the appendix.\\n\\n4) Two other reviewers in the team have emphasized that understanding incentives and human preferences when making decisions, with or without algorithmic recommendations, is important to understand the shape of the collaborative characteristic function. We are trying to strike the right balance between these two opposite views (shorten it vs. emphasize it). We faced the trade-off of exploring more factors affecting the collaborative characteristic function but with much less statistical power. We decided to investigate monetary incentives with high statistical power (which turned out to be important, given the null result). We see that as one example through which collaborative characteristic functions (which we hope are the contribution of our work) can vary. We hope this opens up several lines of work exploring collaborative characteristic functions in the real world, as we emphasize in the conclusions.\"}", "{\"title\": \"Rebuttal of weaknesses\", \"comment\": \"We thank the reviewer for highlighting the strengths of our paper. 
The reviewer is right that the paper already has a lot of content (and may be complex as a result), which is why we do not go deep into some of the extensions that the reviewer mentions (such as alternative problems to the knapsack, or exploring in depth why monetary incentives do not work). We tried our best to make the description as clear as possible. We will put substantial effort into improving the writing for a camera-ready version (and have already adjusted some wording as per the reviewer\\u2019s more specific questions).\\n\\nWe also agree that the knapsack problem is just one problem. However, its advantages make the application affordable with humans recruited through Prolific. In particular: participants require little training to be able to solve it (making the recruitment of study participants affordable), the optimal solution is not obvious to humans but easy for us to calculate (so we can compare solutions to the undisputable ground truth), the optimal knapsack value is unique and unambiguous, even if there may exist more than one optimal solution (much harder to say for images and text), and we can generate instances at almost zero cost. While we aim to design user studies with more common learning tasks for future work, we hope that one empirical context (with variations in monetary incentives) is worthwhile as a first empirical setting. In the conclusions, we emphasize the importance of additional and realistic empirical settings as follow-up research: *\\u201cwe see this as a first proof of concept of collaborative characteristic functions, but much more work is needed to estimate these functions in real-world settings.\\u201d*\\n\\nWe agree that an in-depth analysis of monetary incentives would be worthwhile. 
In practice, in our setting, understanding why humans react to monetary incentives is important for understanding changes in the shape of our collaborative characteristic function and corresponding learning paths (from our work, we know at least that the lack of an effect of monetary incentives does not stem from participants not understanding them, since we tested them with and without a comprehension question). However, we don\\u2019t see that as the main objective of our paper. We acknowledge that the collaborative characteristic function can take any value, and our contribution lies in offering a framework to understand dynamic learning: *\\u201cThe function \\u2206U can take any arbitrary form. Several factors can affect \\u2206U, e.g., humans\\u2019 attitudes towards algorithms, ML explanations, and monetary incentives (we empirically explore the latter in Section 4).\\u201d* In the conclusions, as mentioned above, we emphasize that much more work is needed to understand the shape of these collaborative characteristic functions in the real world, and how humans\\u2019 incentives and preferences influence them.\"}
A theoretical framework is proposed to describe the collaboration process and quantify the quality of solutions from both models and humans. A sufficient condition, which ensures non-decreasing utility, is provided to guarantee the achievement of a stable point.\\n3. An empirical experiment was conducted with real users, offering interesting insights into practical applications.\", \"weaknesses\": \"1.\\tThe theoretical framework primarily aims to describe the problem. Both the theory and convergence conditions rely on acquiring the utility function, which seems achievable only with knowledge of the ground truth. However, as discussed in the Introduction and Related Work, a key intuition of this paper is addressing the inaccessibility of ground truth in real-world scenarios. Consequently, the theory offers limited insights at the methodological level.\\n2.\\tSeveral expressions and derivations are unclear and lack rigor. E.g., it seems that $\\delta_{M_t}$ in Eq. (3) should be determined jointly by $M_t$ and $X$. In Definition 3 and Proof A.6, $U(H(X, Y_{M_t}))$ should instead be written as $U(X, H(X, Y_{M_t}))$. Additionally, the logic behind the proof of Proposition 1 is unclear, particularly why $E_{x \\in \\mathcal{X}}(U(Y_{M_{t+1}}))=0$ holds. And Observation 2 is also confusing. Why should the absolute difference in distance measures equate to the difference in utilities?\\n3.\\tThe authors devote substantial space to describing the experiment and results related to monetary incentives. 
However, since these are empirical observations of a single confounding factor in a specific scenario, they provide limited insight and generalizability from a broader perspective.\", \"questions\": \"All my questions are listed in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper examines Human-ML collaboration under performative prediction settings through theoretical analysis and an empirical experiment on ML-assisted combinatorial knapsack problems.\", \"In the setup, users interact with a predictive system in discrete time steps. At each time step $t$, a model $M_t$ predicts a label $Y_{M_t}$ based on features $X$. This prediction serves as decision support for a human decision-maker, who then makes their own prediction $Y_{H_t}$. Pairs $(X,Y_{H_t})$ are used to train the subsequent model $M_{t+1}$, and it is assumed that $M_{t+1}$ perfectly aligns with its training distribution. Definition 1 introduces utility $\\mathbb{U}(X, Y)$ for prediction-label pairs, defining its properties axiomatically. It then defines the collaborative characteristic function, which captures one-step utility improvement, and $\\mathbb{L}_{\\Delta\\mathbb{U}}(s,t)$ as the trajectory of expected utilities for a system whose initial utility is $s$. Propositions 1 and 2 show that utility trajectories converge under monotonicity assumptions.\", \"The empirical section evaluates the impact of model-based advice on human solutions for the 0-1 knapsack problem. Human participants interact with an ML-supported system to solve knapsack problems, possibly receiving predictions of the optimal solution. Six models with varying accuracy were trained before the experiment using synthetic optimal solutions, and each experimental group received distinct models and possibly different monetary incentives. 
Results indicate that incentivization schemes had no significant impact on solution quality, while decision-support quality correlated with human solution quality. Collaborative learning trajectories were presented based on these results.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper addresses a well-motivated topic.\", \"The empirical analysis is grounded in data from real human subjects.\", \"Results seem to provide interesting insights into ML-assisted decision-making contexts.\"], \"weaknesses\": [\"The paper claims to provide an empirical evaluation of performative prediction but seems to lack essential elements of this setup. Specifically, prediction models were trained on synthetic data before the experiment, and the experiment does not include \\\"feedback loops,\\\" which are a defining component of performative prediction.\", \"The theoretical analysis applies to a limited form of performative prediction, assuming that utility trajectories are determined solely by population-wide average utility, without taking the structure of the predictor into account. While functions like the collaborative learning path are interesting, it is not clear whether the definitions are applicable in more general scenarios.\", \"The empirical approach uses an atypical learning task: predicting a binary solution vector for a combinatorial 0-1 knapsack problem based on random synthetic instances and optimal solutions. The paper notes a possible analogy to multi-task classification, but it\\u2019s unclear how results extend to conventional ML tasks on non-synthetic data.\"], \"questions\": [\"When are the conditions in Definition 1 expected to hold? Examples of suitable utility functions in binary classification and scalar regression would be very helpful.\", \"Could the notation in eq. (1) be clarified? 
Specifically, $Y_{H_{t-1}}$ seems to appear both as an argument of the function, and as a variable sampled from $D_{t-1}$.\", \"How was Appendix Figure 6 (L403) generated?\", \"In the theoretical analysis, how would results change if the training set in each step were finite?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper primarily presents a new dynamic framework for thinking about the deployment of ML models in performative human-ML collaborative systems, helping to understand how ML influences human decision-making processes. This research is intriguing and has practical value.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper has several strengths, as follows:\\n1.\\tA new dynamic framework is proposed for considering the deployment of ML models in human-ML collaborative systems.\\n2.\\tThe involvement of participants in real-world scenarios enhances the credibility of the research. The design of the empirical study allows for clear identification of the actual ground truth, providing evidence for the research results.\\n3.\\tThe findings of the paper have practical significance, aiding companies in optimizing the training and deployment strategies of ML models.\", \"weaknesses\": \"1.\\tThe paper is hard to follow; its complexity may make it difficult for readers to understand.\\n2.\\tThe research focuses primarily on the knapsack problem scenario, which may limit the generalizability of the results. 
It is recommended that the authors consider validation in different types of problems to enhance applicability.\\n3.\\tThe paper mentions the failure to find a positive impact of incentive mechanisms on human decision quality, and the explanation for this phenomenon is insufficient, leading to a superficial discussion of the incentive mechanisms without exploring the potential reasons behind it.\", \"questions\": \"1.\\tThe paper initially presents that current human-ML collaborative systems face three crucial challenges, but the subsequent text does not detail the innovations made in addressing or alleviating these three issues. I hope to see a clear exploration of how the paper addresses or alleviates each of these challenges in the introduction.\\n2.\\tA deeper discussion on incentive mechanisms: Provide more discussion on the ineffectiveness of incentive mechanisms to help readers understand the potential reasons for this phenomenon.\\n3.\\tThe contributions are trivial, making it difficult for readers to understand the key points of this paper. I hope the authors can rewrite their contributions.\\n4.\\tIn Definition 1, the definitions of $Y_{min}$ and $Y'$ are not specified. In Definition 5, the definition of $x_1, \\u2026, x_n$ should be placed in the main text.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their insightful comments and the question.\\n\\nWe will publish the human-labeled data along with the predictions of all ML models for each knapsack instance. We hope that this will be a valuable dataset for other research areas such as learning-to-defer, which usually have only a few datasets with human and ML labels. \\nWe will also release the code for generating the hard knapsack instances, the models, and all of our code for the data analysis, plots, and post-processing of the instances. 
We will also release the code for our study platform, which can be easily adapted for other experiments.\\n\\nAs for the reviewer\\u2019s concerns, we agree that the knapsack problem is abstract. We believe that, as a first proof of concept, the advantages of the knapsack problem outweigh the limitations that the reviewer correctly pointed out. In particular: participants require little training to be able to solve it (making the recruitment of study participants affordable), the optimal solution is not obvious to humans but easy for us to calculate (so we can compare solutions to the undisputable ground truth), the optimal knapsack value is unique and unambiguous, even if there may exist more than one optimal solution (much harder to say for images and text), and we can generate instances at almost zero cost. While we aim to design user studies with more common learning tasks for future work, we hope that the benefits outweigh the disadvantages, as a first empirical setting. In the conclusions, we emphasize the importance of realistic empirical settings as follow-up research: *\\u201cwe see this as a first proof of concept of collaborative characteristic functions, but much more work is needed to estimate these functions in real-world settings.\\u201d*\\n\\nWe agree that an in-depth analysis of human decisions and why humans do or do not follow algorithmic recommendations is important. In practice, in our setting, understanding why humans react to algorithmic recommendations the way they do or why humans make the decisions they make effectively changes the shape of our collaborative characteristic function and corresponding learning paths. In the paper, we already have one empirical example that could potentially change humans\\u2019 decisions (monetary incentives), but exploring more of them would be a paper in and of itself. 
We emphasize this in the paper when we present the collaborative characteristic function (slightly edited from the previous version to incorporate the reviewer\\u2019s feedback): *\\u201cThe function \\u2206U can take any arbitrary form. Several factors can affect \\u2206U, e.g., humans\\u2019 attitudes towards algorithms, ML explanations, and monetary incentives (we empirically explore the latter in Section 4).\\u201d*\\nWe hope that making the data publicly available would allow others to dive deeper into this important question.\\n\\nWe agree that simulations would have been cheaper and faster. We feared that the criticism at that point would have been the lack of empirical data backing our framework. For example, the review team emphasizes the involvement of study participants in real-world scenarios as important to increase the credibility of the work. However, we can certainly add a simulation (of both a learning path that leads to a good equilibrium and a learning path that leads to a bad equilibrium) in the appendix for a camera-ready version of the paper.\"}", "{\"title\": \"Answer to Questions\", \"comment\": \"Regarding the reviewer\\u2019s specific questions:\\n\\nWe provide two utility functions in the paper for the 0-1 knapsack problem modeled as a multilabel binary classification problem for which Definition 1 holds. For one dimensional binary classification, the inverted 0-1 loss would satisfy all criteria of definition 1. It is bounded, epsilon sensitive because there are only two possible labels, which also makes the distance measure obsolete. For scalar regression, any quantized (eps sensitivity), bounded and inverted (low distance $\\\\Leftrightarrow$ high utility) distance metric should satisfy this definition. The main goal of the definition is to rule out two things: 1) that we will never reach a stable point because even the smallest improvement will be learned (epsilon sensitivity). This is automatically true in any real world setting. 
2) That we don\\u2019t reach stable points because we move towards infinite utility forever. We would argue that this is also satisfied in any real-world setting.\\nThank you for the remarks about eq. 1, which we will elaborate on in a potential camera-ready version. The idea was to take the expectation over humans (outer expectation) and, within humans, over all instances that a single human works on (inner expectation). We see that this can be confusing, as our notation suggests that this is a specific solution.\\n\\nAppendix Figure 6 was generated by sampling 10, 25, 50, 75, and 100% of the human data (without the help of ML) that we collected during our experiment. We then trained the same model architecture on that data as the one in our study. We resampled and retrained 500 times for each sample size (10, 25, 50, 75, 100%). The goal was to check whether we exposed the humans to reasonable model performances when we trained on synthetic data (reasonable in the sense that the model performance achieved with human-labeled data is similar to that of models trained on synthetic data).\\n\\nThank you for the question about what would happen if we dealt with finite data in our theory, as this is an interesting one that we are currently working on as an extension of this work. We see it as very related to your prior comment that \\u201cutility trajectories are determined solely by population-wide average utility.\\u201d Deviations arising from finite data would imply that the mapping from the human-ML performance to the next ML performance need not lie on the 45-degree line (and may display variance that is a function of the sample size). As long as this mapping is monotonically increasing (higher human-ML performance leads to higher ML performance in the next round), our insights remain, but the speed of convergence to equilibrium may be affected. 
This direction has important deployment decision implications, and we think that the framework presented in this paper invites the exploration of various new related directions.\"}", "{\"title\": \"Answers to questions\", \"comment\": \"As for the specific questions:\\n\\n1) We understand that the use of \\u201cchallenges\\u201d can be misleading, and apologize for that. We are not trying to solve the three specific \\u201cchallenges.\\u201d Rather, the three challenges characterize the context we study (make it interesting), and motivate our dynamic learning framework. We have rephrased that sentence accordingly: *\\u201cThree key features characterize contexts where companies implement human-ML collaborative systems: 1) ML models learn from past human decisions, which are often only an approximation to the ground truth (noisy labels); 2) ML models are rolled out to help future human decisions, affecting the data-generating process of human-ML collaboration that then influences future updates to ML models (performative predictions as in Perdomo (2020)); and 3) the quality of the human-ML collaborative prediction of the ground truth may change as a function of incentives and other human factors. These features create a dynamic learning process.\\u201d*\\n\\n2) We are interested in incentive mechanisms only to the extent that they change our collaborative characteristic function (and other reviewers have also emphasized that this is only one mechanism through which collaborative characteristic functions can change shape). We are trying to strike the right balance between these two opposite views. 
We discuss at least one result on incentives in the paragraph starting with *\\u201cThe null effect of monetary incentives is not due to the fact that users did not understand the bonus structure\\u2026\\u201d* We have also added text to emphasize that the discussion of incentive mechanisms is beyond the scope of our paper: *\\u201cWhile a deeper exploration of incentive mechanisms is beyond the scope of this paper, future research should explore how incentive design can change the shape of collaborative characteristic functions. We return to this in the Conclusions.\\u201d* Finally, our conclusions already emphasize that more work is needed along this dimension: *\\u201cStudying the interaction of monetary incentives and ML performance is an important extension. The null result of monetary incentives should be interpreted within our context. Specifically, the study participants received payments above minimum wage, and we only tested different levels of linear performance bonuses. It would be valuable to extend our work to evaluate the extent to which alternative base payments or non-linear bonuses may induce different levels of quality and effort by participants and thus collaborative characteristic functions of varying shapes.\\u201d*\\n\\n3) Our contributions highlight the theoretical framework, the empirical proof of concept, and the practical implications for companies deploying recommendations to help humans\\u2019 decision-making. We hope some of the clarifications above and rewriting of the text have helped clarify our work. \\nIn Definition 1, we specify $Y_{min}$ and $Y\\u2032$ as elements of $\\\\mathcal{Y}$. $Y_{min}$ is subsequently used in the first property to characterize the minimum utility for a given $X$, while $Y\\u2032$ is utilized in other points of Definition 1. We make sure this is explicit in the new version of the paper. We also appreciate your suggestion regarding the placement of $x_1, \\u2026, x_n$ in Definition 5. 
In the revised version of our paper, we have incorporated this definition into the main text as per your recommendation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
4wk2eOKGvh
Test-Time Ensemble via Linear Mode Connectivity: A Path to Better Adaptation
[ "Byungjai Kim", "Chanho Ahn", "Wissam J. Baddar", "Kikyung Kim", "HUIJIN LEE", "Saehyun Ahn", "Seungju Han", "Sungjoo Suh", "Eunho Yang" ]
Test-time adaptation updates pretrained models on the fly to handle distribution shifts in test data. While existing research has focused on stable optimization during adaptation, less attention has been given to enhancing model representations for adaptation capability. To address this gap, we propose Test-Time Ensemble (TTE) grounded in the intriguing property of linear mode connectivity. TTE leverages ensemble strategies during adaptation: 1) adaptively averaging the parameter weights of assorted test-time adapted models and 2) incorporating dropout to further promote representation diversity. These strategies encapsulate model diversity into a single model, avoiding computational burden associated with managing multiple models. Besides, we propose a robust knowledge distillation scheme to prevent model collapse, ensuring stable optimization and preserving the ensemble benefits during adaptation. Notably, TTE integrates seamlessly with existing TTA approaches, advancing their adaptation capabilities. In extensive experiments, integration with TTE consistently outperformed baseline models across various challenging scenarios, demonstrating its effectiveness and general applicability.
[ "test-time adaptation", "domain adaptation", "linear mode connectivity" ]
Accept (Poster)
https://openreview.net/pdf?id=4wk2eOKGvh
https://openreview.net/forum?id=4wk2eOKGvh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z8PXSk4JkV", "wjsfHys9Si", "tyXhZpeCVJ", "qP1qwZMzaZ", "p0GYzfBdjC", "okaQm6KXx5", "lLyxHfWIKU", "j0j7oBBqZZ", "asod3LvyZ4", "XAqqaRPTPV", "VrOO1GBgb4", "VgSq9djgea", "VgPogJE4NK", "UxqEUVDuqA", "Tpd3K8ELLw", "SXlVtZJOA3", "Qw44aJWbZl", "QSt1QZEMeR", "LdKU8Naiom", "GvZ5JIixK5", "AvbYDLqiwA", "9ntjrFpnjJ", "5n0b3IeLjp", "4o7NCXccss", "3PSLlYBpgO" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732170633762, 1730147314705, 1732579803096, 1732170929551, 1731070137474, 1729937228204, 1732526947369, 1732169333751, 1732169094126, 1732171971292, 1732170967373, 1732170456475, 1732352415634, 1732512529893, 1732515173491, 1734415817122, 1732336205410, 1732516646648, 1730969119719, 1732527845184, 1732168756002, 1732171689218, 1737523694112, 1732354268928, 1732579708708 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_NKQc" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_drM4" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_sF3M" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_drM4" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Area_Chair_dAXu" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_NKQc" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_xGvc" ], [ "ICLR.cc/2025/Conference/Submission5251/Reviewer_sF3M" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ], [ "ICLR.cc/2025/Conference/Submission5251/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer NKQc (2/4)\", \"comment\": \"> **W4. For Table 3, with continual TTA, the authors compared all other baselines with TTE with DeYO, but not TTE with other variants as well, and for continual TTA, I don't understand how the performance can still perform well in the direction of the adaptation without degrading too much as we can see in the other approaches (for example DeYO goes from 28.1 to 3.7 and then 7.2), for me it only make sense if you ensemble with zero-shot model as well (or reset the model weights) but if this is the case it should be done for all other baselines as well.**\\n\\nWe note that we did not employ any parameter initialization/reset schemes with a pre-trained zero-shot model during TTA processes. Instead, TTE only relies on robust distillation to prevent model collapse (Please refer to the response to W2). The continual TTA with non-i.i.d. conditions (Table 3) is a challenging scenario, where TTA models are prone to collapse. The experiments were introduced to rigorously assess the robustness of TTE, contributed by the proposed knowledge distillation scheme described in Section 3.2. 
Notably, the extensive experiments in this paper (Tables 1-4) demonstrate that the integration of TTE effectively avoids collapse across four test-time scenarios and four datasets. To improve clarity, we conducted additional experiments based on the reviewer\\u2019s comments. The details of the experiments are as follows:\\n\\n**Additional experiments with Tent+TTE and SAR+TTE**: To validate the robustness of TTE further, we revisited the experiments in Table 3 and conducted experiments with other baselines integrated with TTE (e.g., **Tent+TTE** and **SAR+TTE**). The results are consistent with the results of DeYO+TTE in Table 3, as follows:\\n\\n### Continual TTA with non-i.i.d. conditions. Average accuracy (\\%) with ImageNet-C.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 30.6 | 29.9 |\\n| Tent | 0.6 | 3.9 |\\n| Tent+TTE | **44.2(+43.6)** | **58.3(+54.4)** |\\n| SAR | 23.0 | 46.0 |\\n| SAR+TTE | **44.9(+21.9)** | **60.3(+14.3)** |\\n| DeYO | 2.8 | 53.5 |\\n| DeYO+TTE | **49.7(+46.9)** | **61.6(+8.1)** |\\n\\n**Additional experiments for catastrophic forgetting**: TTA methods often experience severe performance degradation on in-distribution data after adaptation, a phenomenon known as catastrophic forgetting. Following [1], we concurrently measured the accuracy on the clean ImageNet dataset right after each adaptation to a distribution in the above experiments. The results show that the integration with TTE successfully avoids forgetting issues, while other baseline methods suffer from it, as follows.\\n\\n### Comparison of preventing catastrophic forgetting on Continual TTA with non-i.i.d. conditions. 
Average accuracy (\\\\%) with clean ImageNet.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 80.0 |78.0|\\n| Tent | 10.1 | 9.8 |\\n| Tent+TTE | **75.1(+65.0)** | **79.9(+70.1)** |\\n| SAR | 56.7 | 76.2 |\\n| SAR+TTE | **75.4(+18.7)** | **80.4(+4.2)** |\\n| DeYO | 8.1 | 38.4 |\\n| DeYO+TTE | **71.3(+63.2)** | **77.9(+39.5)** |\\n\\nThe details of the additional experiments (i.e., accuracy for each shift) have been included in the **Figure 8** and **Table 11** in the final revision.\\n\\n> W5. For Table 4, column V2 seems strange, as almost all results in a) are 68.9, even DeYO and DeYO with TTE. Could you also add more results with other batch sizes? (the batch size can play an important role in different algorithms, which can benefit DeYO and not the others. For instance, I recommend taking a look at the paper \\\"Bag of Tricks for Fully Test-Time Adaptation, IEEE/CVF Winter Conference on Applications of Computer Vision. 2024\\\", which shows the role of batch size in some of the TTA algorithms.\\n\\nThank you for your insightful comments and paper recommendation. We address your comments point by point below.\\n\\n**Clarification on V2 Results**: ImageNet-V2 consists of data sampled after a decade of progress on the original ImageNet dataset and is used in our work to measure adaptation performance under intrinsic distribution shifts. While conventional TTA methods, including TTE, show promising results under extrinsic shifts (ImageNet-C, R, and S), they fail to provide performance gains for intrinsic shifts. This finding highlights the need for further investigation into handling such shifts effectively.\\n\\n**Batch Size Effect**: Previous studies [2, 3], including the one you referenced, have demonstrated that batch size significantly impacts TTA models, particularly those using batch normalization. Acknowledging this effect, we aimed to avoid the instability caused by batch normalization in this paper. 
To achieve this, we employed architectures that are robust to batch size variations, specifically Vision Transformers with layer normalization (ViTBase) and ResNet50 with group normalization (ResNet50-GN). This design choice ensures that TTA methods, even without TTE, exhibit stable adaptation performance under extreme small-batch settings, such as Batch Size 1 (as shown in **Tables 1 and 4**).\"}", "{\"summary\": \"The paper introduces the Test-Time Ensemble (TTE), a method designed for TTA using the theory of weights space ensemble, which can be used on top of different TTA methods. The authors show different results for TTA over corruptions with different baselines, and the method seems to work pretty well. Furthermore, the authors also provided results for continual TTA, which is interesting.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, easy to follow, and detailed. I liked how the authors presented the work and motivated toward the problem. Furthermore, the results are motivating, and the idea seems easy to implement on top of different methods (as demonstrated by the authors), which can be beneficial for the community if the authors also provide the full code for reproducibility.\", \"weaknesses\": [\"Personally, I did not see many problems with the paper, but I would suggest the authors proofread again to avoid problems such as the following typo \\\"with lager and more complex\\\" -> \\\"with larger and more complex\\\" in the introduction or \\\"Adaptvie momentum\\\" -> \\\"Adaptive momentum.\\\"\", \"If the authors work on the following points, it will improve a lot the quality of the work:\", \"I am not so convinced by section 3.2, DE-BIASED AND NOISE-ROBUST KNOWLEDGE DISTILLATION. Could you clarify this a bit more in this section? And maybe make it more clear in the paper.\", \"In Equation 5, there is no hyperparameter to balance the terms. 
I think it should be included, right?\", \"For Table 3, with continual TTA, the authors compared all other baselines with TTE with DeYO, but not TTE with other variants as well, and for continual TTA, I don't understand how the performance can still perform well in the direction of the adaptation without degrading too much as we can see in the other approaches (for example DeYO goes from 28.1 to 3.7 and then 7.2), for me it only make sense if you ensemble with zero-shot model as well (or reset the model weights) but if this is the case it should be done for all other baselines as well.\", \"For Table 4, column V2 seems strange, as almost all results in a) are 68.9, even DeYO and DeYO with TTE. Could you also add more results with other batch sizes? (the batch size can play an important role in different algorithms, which can benefit DeYO and not the others. For instance, I recommend taking a look at the paper \\\"Bag of Tricks for Fully Test-Time Adaptation, IEEE/CVF Winter Conference on Applications of Computer Vision. 2024\\\", which shows the role of batch size in some of the TTA algorithms.\", \"I would suggest revisiting some of the baselines for Tab 1. I would also consider a baseline with other methods of the local ensemble as well, such as SWA with TTA, and for Tab 4. I would also add other methods with TTE (maybe in the supp. material).\"], \"rebuttal_period\": \"After carefully reading all the answers provided by the authors, I feel that my questions were answered, and I don't have any additional questions. I am confident to change my decision from \\\"marginally above the acceptance threshold\\\" to \\\"accept, good paper\\\".\", \"questions\": \"Here, I am adding the questions that I find relevant for improving the work quality; some of them were already discussed in the Weaknesses section:\\n\\n- Could you clarify this a bit more in section 3.2? 
How is it important for the method?\\n\\n- Do you think the batch size can impact the results, especially the ones provided for the continual TTA?\\n\\nPlease consider answering the points on the weaknesses as well. Furthermore, I am open to discussion, and I think that the work has a good potential for the community.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks to Reviewer drM4\", \"comment\": \"We greatly appreciate your recognition of the core contributions and strengths of our work.\\n\\nIf you have any further questions or suggestions, please feel free to reach out. We are fully committed to addressing any concerns or providing additional clarifications.\\n\\nThank you once again for your constructive and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer NKQc (3/4)\", \"comment\": \"In response to your comments, we revisited the experiments in Table 4, varying the batch sizes to assess their impact. 
The results confirm that performance improvements achieved by TTE remain consistent across different batch sizes.\\n\\n### Averaged classification accuracy (\\%) with the change of batch size in natural distribution shifts (ResNet50-GN).\\n| | BS1 | BS4 | BS16 | BS64 |\\n|-|:-:|:-:|:-:|:-:|\\n| NoAdapt | 46.3 | 46.3 | 46.3 | 46.3 |\\n| DeYO | 50.6 | 50.6 | 50.8 | 49.2 |\\n| DeYO+TTE | **51.7(+1.1)** | **51.9(+1.3)** | **51.8(+1.0)** | **50.5(+1.3)** |\\n\\n### Averaged classification accuracy (\\%) with the change of batch size in natural distribution shifts (ViTBase).\\n| | BS1 | BS4 | BS16 | BS64 |\\n|-|:-:|:-:|:-:|:-:|\\n| NoAdapt | 37.9 | 37.9 | 37.9 | 37.9 |\\n| DeYO | 58.2 | 58.2 | 58.4 | 58.1 |\\n| DeYO+TTE | **59.2(+1.0)** | **59.1(+0.9)** | **59.2(+0.8)** | **58.6(+0.5)** |\\n\\nIn the revised main manuscript, we have clarified the rationale behind selecting these architectures and included a reference to the recommended paper, which provides a solid foundation for our design choices, as follows: \\\"Architectures with batch normalization were excluded due to their batch size sensitivity and instability during the TTA process (Niu et al., 2023; Mounsaveng et al., 2024).\\\"\\n\\n> **W6. I would suggest revisiting some of the baselines for Tab 1. I would also consider a baseline with other methods of the local ensemble as well, such as SWA with TTA, and for Tab 4. I would also add other methods with TTE (maybe in the supp. material).**\\n\\nThank you for your insightful suggestions to enhance the quality of our paper. Following your comments, we conducted additional experiments to address these points:\\n\\n**Comparison with Stochastic Weight Averaging (SWA)**: To evaluate the effectiveness of the proposed adaptive weight-averaging method, we compared it with stochastic weight averaging (SWA) [4], which inspired TTE as one line of offline generalization research. 
For this comparison, we created a TTE variant by replacing the proposed ensemble strategy with SWA while keeping all other components unchanged. SWA employs uniform averaging of SGD iterates (generated TTA models) and is implemented as: $w_{\\text{swa}}\\gets \\frac{w_{\\text{swa}}\\cdot n_{\\text{models}}+w}{n_{\\text{models}}+1}$. In response to reviewer comments, experiments were conducted using the scenario from Table 1, specifically ImageNet-C under the Label Shifts setup. The results demonstrate that the proposed adaptive averaging scheme outperforms SWA, highlighting the advantages of its adaptive mechanism for online TTA over SWA's uniform averaging. The results are summarized as follows:\\n\\n### Comparison study between different weight averaging schemes. Averaged accuracy with ImageNet-C (Label Shifts).\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 30.6 | 29.9 |\\n| DeYO | 40.8 | 61.3 |\\n| +TTE (SWA) | 47.4(+7.4) | 64.7(+3.4) |\\n| +TTE (Ours) | **50.7(+9.9)** | **65.2(+3.9)** |\\n\\n**Additional Baselines with TTE**: We revisited the experiments in Table 4 and included results for **Tent+TTE** and **SAR+TTE**. These experiments further validate the effectiveness of TTE, with results consistent with those observed with DeYO+TTE. 
The performance is summarized as follows:\\n\\n### Averaged classification accuracy (%) with natural distribution shifts (Label Shifts).\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 46.3 | 37.9 |\\n| Tent | 47.1 | 39.8 |\\n| Tent+TTE | **48.1(+1.0)** | **52.7(+12.9)** |\\n| SAR | 47.0 | 43.8 |\\n| SAR+TTE | **47.8(+0.8)** | **52.6(+8.8)** |\\n| DeYO | 49.3 | 57.3 |\\n| DeYO+TTE | **50.3(+1.0)** | **57.9(+0.6)** |\\n\\n### Averaged classification accuracy (%) with natural distribution shifts (Batch Size 1).\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 46.3 | 37.9 |\\n| Tent | 47.6 | 38.9 |\\n| Tent+TTE | **49.7(+2.1)** | **53.8(+14.9)** |\\n| SAR | 47.5 | 46.8 |\\n| SAR+TTE | 47.5(+0.0) | **51.0(+4.2)** |\\n| DeYO | 50.7 | 58.2 |\\n| DeYO+TTE | **51.7(+1.0)** | **59.2(+1.0)** |\\n\\nThe additional experiments and their details have been included in **Tables 12 and 13** of the revised manuscript.\\n\\n> **Q1. Could you clarify this a bit more in section 3.2? How is it important for the method?** \\n\\nPlease refer to the response to W2.\\n\\n> **Q2. Do you think the batch size can impact the results, especially the ones provided for the continual TTA?**\\n\\nPlease refer to the response to W5.\\n\\nAll responses will be included in the final revision. We are always open to further discussion, so if you have any additional concerns or suggestions, please do not hesitate to share them\\u2014we greatly value your feedback and are committed to improving the quality of our work.\"}", "{\"summary\": \"This paper introduces a novel test-time ensemble approach that can be seamlessly integrated with existing TTA models to enhance adaptation. Specifically, the proposed framework reduces domain gaps through two ensemble strategies: weight averaging of TTA models and dropout. 
Additionally, a knowledge distillation strategy is employed to mitigate both noise and bias for improving model robustness under different distribution shifts.\\nExtensive experiments are conducted in different TTA scenarios to demonstrate the superiority of the proposed method over existing baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Writing quality is good. The paper is well-structured, and clearly written.\\n2. Good insights. This paper explores TTA as a domain generalization problem, uncovering linear connectivity within TTA models. This perspective suggests that domain generalization techniques could enhance model representations for TTA tasks. \\n3. SOTA performance. The proposed method achieves the state-of-the-art performance via the integration with different TTA models in various scenarios.\\n4. Ablations. Ablation experiments are provided to verify the effectiveness of the proposed modules.\", \"weaknesses\": \"1. Although the results presented in Tables 3 show the performance improvement achieved by the proposed framework in the continual TTA scenario, it is unclear how the method enhances baseline performance in later adaptation stages. Additionally, I would like to know if the proposed method addresses the issue of catastrophic forgetting in this context.\", \"additional_question\": \"Is it possible to extend the proposed benchmark construction method to dense prediction tasks, such as semantic segmentation? It would be very meaningful if it can be applied to various tasks.\", \"questions\": \"Please refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper considers a new problem, test-time ensemble (TTE), which aims at using multiple models generated during TTA. This paper first formulates the test-time ensemble problem. 
The paper also proposes the weight average and dropout as the baseline methods to evaluate the performances.\", \"the_contributions_can_be_summarized_as\": \"(1) The author revealed that TTA models exhibit linear mode connectivity, an intriguing insight that simplifies and enhances the adaptation process.\\n\\n(2) The author introduced Test-Time Ensemble (TTE), a novel and computationally efficient approach that not only enriches model representations but also stabilize TTA optimization through de-biased and noise-robust knowledge distillation.\\n\\n(3) TTE integrated effortlessly with existing TTA methods, enhancing adaptation in diverse scenarios and showing potential for applicability to future TTA methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) This paper proposes the new problem, test-time problem, which is different from previous test-time adaptations. I believe this problem has some practical applications.\\n\\n(2) The paper proposes some simple baseline methods that can effectively address the problem.\", \"weaknesses\": \"(1) The analysis of test-time adaptation does not inspire the new methods. The moving average ensemble methods are popularly adapted in self-supervised learning and ensemble methods. I consider the Linear Mode Connectivity theory should tell the reason and the situation that the models generated during test-time adaptation.\\n\\n\\n(2) Limited technical novelty: this paper proposes the two-branch structure and leverage the weight average to improve the performance. Similar techniques are implemented in https://github.com/huggingface/pytorch-image-models. I do not see anything new compared to what have been proved in image classification.\\n\\n(3) Unclear description. In section 3, the de-biased distillation subsection does not describe clearly where the bias comes from. I suggest the author should explain the bias again. 
Also, I can not understand what the connection between the spike phenomena of the accuracy curve and the bias.\", \"questions\": \"Please see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much to the authors for addressing my questions and resolving my concerns. Based on the current discussions, I have decided to maintain a positive rating.\"}", "{\"title\": \"Response to Reviewer xGvc (2/2)\", \"comment\": \"To further verify the technical novelty of the de-biasing scheme, we revisited the experiments in Table 3 and conducted with **Tent+TTE** and **SAR+TTE**. The scenario in Table 3 is challenging where conventional TTA models frequently suffer from collapse. The results are consistent with the original results of **DeYO+TTE** as follows.\\n\\n### Continual TTA with non-i.i.d. conditions. Average accuracy (\\\\%) with ImageNet-C.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 30.6 |29.9|\\n| Tent | 0.6 | 3.9 |\\n| Tent+TTE | **44.2(+43.6)** | **58.3(+54.4)** |\\n| SAR | 23.0 | 46.0 |\\n| SAR+TTE | **44.9(+21.9)** | **60.3(+14.3)** |\\n| DeYO | 2.8 | 53.5 |\\n| DeYO+TTE | **49.7(+46.9)** | **61.6(+8.1)** |\\n\\nThe final manuscript will clarify these contributions to address the reviewer concerns.\\n\\n> **W3. How to conduct the weight-space ensemble without adding computational complexity? Will the technique increase storage consumption?**\\n\\nWe appreciate the reviewer\\u2019s comment and agree that our original expression was too strong. We have revised the manuscript to state: \\u201creducing the computational burden of multiple model inference.\\u201d in the Introduction section.\\n\\n> **W4. The knowledge distillation-based debiasing and anti-noise strategies proposed in the paper may not be able to completely solve the problem of noisy test data. 
How to solve the scenario that the pseudo labels are incorrect?**\\n\\nWe agree with the reviewer\\u2019s comment that the proposed noise-robust distillation cannot completely prevent noisy pseudo label issues. However, we have both empirically and theoretically demonstrated that it effectively alleviates their impact in this paper. This is achieved through the use of reverse KL divergence, which reverses the order of the student and teacher representations. Mathematically, unlike standard KL divergence, the gradient of reverse KL divergence does not depend on the magnitude ratio of student and teacher predictions, making it less sensitive to noisy teacher predictions. These findings align with observations in supervised learning with noisy labels [1]. A detailed mathematical analysis of this noise-robust distillation approach is provided in **Appendix D.2**.\\n\\n> **Q1. Why the results of CoTTA in Continual TTA with non-i.i.d. conditions are only 2.2\\\\% and 3.4\\\\%?**\\n\\nCoTTA was originally developed for continual TTA applications. However, it does not account for the challenging scenarios of non-i.i.d. conditions, both in terms of distributions and classes. In this paper, we introduced such scenarios to evaluate the robustness of TTE, as these conditions make conventional TTA models particularly prone to collapse. The observed collapse in CoTTA under other challenging scenarios is consistent with findings reported in previous research [2].\\n\\nThe revised manuscript will incorporate all the responses. We deeply value your feedback and are eager to engage in further discussion. If you have any additional concerns or suggestions, please do not hesitate to let us know.\\n\\n> References\\n> 1. Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 322\\u2013330, 2019b.\\n> 2. 
Longhui Yuan, Binhui Xie, and Shuang Li. Robust test-time adaptation in dynamic scenarios. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15922\\u201315932, 2023.\"}", "{\"title\": \"Response to Reviewer xGvc (1/2)\", \"comment\": \"We thank the reviewer for their thoughtful feedback and for recognizing the strengths of our work. Below, we address the concerns raised and clarify our contributions.\\n\\n> **W1. The TTE method involves multiple hyperparameters, such as the momentum coefficient , dropout ratio and temperature, which may affect the stability and generalization of the method. Further research on how to reduce the dependence on hyperparameters is crucial.**\\n\\nWe thank the reviewer for highlighting the importance of hyperparameter sensitivity and its impact on the stability and generalization of TTE. We address this concern as follows:\\n\\n- **Demonstrated low hyperparameter sensitivity**: We would like to emphasize that TTE demonstrates low sensitivity to hyperparameters. As shown in **Figures 6 and 7** of the paper, TTE consistently outperforms the baseline across a range of hyperparameter settings. These results indicate that TTE maintains effectiveness even with approximately chosen hyperparameters.\\n\\n- **Uniform hyperparameter configuration**: To further address potential overfitting concerns, we intentionally avoided over-tuned hyperparameter configurations for specific tasks or benchmarks. Instead, we applied identical hyperparameter settings across all four benchmarks and all four test-time scenarios. This generality highlights the adaptability of TTE without reliance on tuning.\\n\\n- **Limitations**: Nevertheless, we recognize the reviewer\\u2019s valid point regarding the challenge of hyperparameter selection at test time. 
To reflect this, we have added a limitation section to the paper (**Appendix F**), explicitly discussing the difficulty of hyperparameter selection.\\n\\nThe final manuscript will incorporate all the responses. We hope this demonstrates our commitment to addressing hyperparameter concerns and enhancing the practical applicability of TTE.\\n\\n> **W2. TTE integrates many well-established techniques such as ensemble, dropout, knowledge distillation, which have been utilized in TTA or few shot learning. The combination has weaken the novelty of the paper, and the unique contributions of the paper should be classified.**\\n\\nWe would like to clarify the novelty of TTE through this response. Our methodological contributions are summarized in two key aspects: 1) adaptive ensemble strategies inspired by linear mode connectivity (Section 3.1) and 2) debiased and noise-robust knowledge distillation as a stable optimization strategy (Section 3.2). To address the reviewer's concern, we clarify the two technical novelties of TTE, which clearly distinguish it from conventional methods.\\n\\n**1. Adaptive ensemble strategies**: TTE introduces adaptive ensemble strategies inspired by the linear mode connectivity property, which shows that a wide range of weight combinations from different models preserves performance (**Figure 1**). Unlike static ensembling, our approach actively performs ensembling, particularly when TTA models are adapted to new distributions and their representations diverge. This adaptive mechanism is especially advantageous in **online TTA**, where incoming data distributions are unknown and may shift dynamically. As demonstrated in the Continual TTA experiments (**Figure 5**), these adaptive ensembles significantly outperform static ensembles by effectively leveraging representation diversity. 
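As a minimal illustration of the weight-space ensembling step described above (a numpy sketch with toy parameter dicts; the adaptive rule for choosing the momentum is given in Section 3.1 and is replaced here by a fixed value):

```python
import numpy as np

def weight_average(ensemble, tta_model, momentum):
    # One weight-space ensemble step: an exponential moving average over
    # parameters, so no extra forward passes through multiple networks are
    # needed. In the adaptive variant, `momentum` would be chosen per step
    # (e.g. based on how far the TTA model's representations have diverged).
    return {name: momentum * ensemble[name] + (1.0 - momentum) * tta_model[name]
            for name in ensemble}

# Toy two-parameter "models" standing in for full networks.
ensemble = {"w": np.array([1.0, 1.0]), "b": np.array([0.0])}
adapted  = {"w": np.array([3.0, 5.0]), "b": np.array([2.0])}

updated = weight_average(ensemble, adapted, momentum=0.9)
print(updated["w"])  # close to [1.2, 1.4]
```

In practice the dicts would correspond to network state dicts; this sketch only shows the averaging arithmetic.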
\\n\\nWhile some elements of our framework may resemble previous techniques, we provide, for the first time, empirical evidence that the TTA process can effectively leverage ensemble techniques by demonstrating linear mode connectivity during adaptation. To the best of our knowledge, this work represents the **first application of linear mode connectivity in TTA research**, introducing a novel adaptive ensemble method specifically tailored to this context. This new perspective on TTA optimization offers valuable benefits to this community.\\n\\n**2. De-biased distillation**: Unlike conventional knowledge distillation, our proposed de-biased distillation introduces a de-biasing mechanism to refine the ensemble representation before distillation. Practically, the de-biasing scheme identifies bias-guiding samples and adjusts their representation by reducing the bias measured during the TTA process. It induces label smoothing effects and acts as a **regularizer during TTA relying only on unsupervised entropy minimization**. As a result, TTE achieves stable optimization by 1) maintaining linear mode connectivity between TTA models and 2) mitigating model collapse, a persistent issue in TTA methods (which often results in near-zero accuracy or catastrophic forgetting). Notably, extensive experiments in this paper (**Table 1-4**) demonstrate that TTE prevents collapse across four benchmark datasets and four test-time scenarios, where baseline methods frequently fail.\"}", "{\"title\": \"Response to Reviewer sF3M (2/2)\", \"comment\": \"Our methodological contributions are summarized in two key aspects: 1) adaptive ensemble strategies inspired by linear mode connectivity (Section 3.1) and 2) debiased and noise-robust knowledge distillation as a stable optimization strategy (Section 3.2). The second contribution of TTE addresses critical challenges in test-time adaptation (TTA). 
Below, we elaborate on these contributions to clarify their novelty:\\n\\n**Debiased and noise-robust knowledge distillation**: We propose robust knowledge distillation to address two degradation factors in ensemble representation before distillation: 1) **Bias** as a prominent aspect of model collapse and 2) **Noise** as prediction errors, to ensure the stability of unsupervised TTA optimization. In particular, we would like to emphasize the technical novelty of the **de-biased distillation** approach in addressing the **model collapse issues**, which are a persistent challenge in TTA. Briefly, the de-biasing scheme identifies bias-guiding samples and adjusts their representation by reducing the bias measured during TTA. It induces label smoothing effects and acts as **a regularizer for TTA relying only on unsupervised entropy minimization**, thereby achieving stable optimization by mitigating model collapse, which often results in near-zero accuracy or catastrophic forgetting. Notably, extensive experiments in this paper (**Table 1-4**) demonstrate that TTE prevents collapse across four benchmark datasets and four test-time scenarios, where baseline methods frequently fail.\\n\\nFor further clarity, we revisited the **continual TTA experiments** in Table 3, a challenging scenario where baseline methods often experience model collapse and catastrophic forgetting. Expanding on prior work with **DeYO+TTE**, we conducted additional experiments using **Tent+TTE** and **SAR+TTE** to validate the effectiveness of the debiasing scheme across different baselines. Additionally, we measured the accuracy on the source data immediately after each adaptation process to assess whether TTE effectively mitigates catastrophic forgetting. The results were consistent with those observed for DeYO, demonstrating that the de-biased distillation reliably prevents performance degradation while addressing critical gaps in existing TTA research.\\n\\n### Continual TTA with non-i.i.d. 
conditions. Average accuracy (\\\\%) with ImageNet-C.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 30.6 |29.9|\\n| Tent | 0.6 | 3.9 |\\n| Tent+TTE | **44.2(+43.6)** | **58.3(+54.4)** |\\n| SAR | 23.0 | 46.0 |\\n| SAR+TTE | **44.9(+21.9)** | **60.3(+14.3)** |\\n| DeYO | 2.8 | 53.5 |\\n| DeYO+TTE | **49.7(+46.9)** | **61.6(+8.1)** |\\n\\n### Comparison of preventing catastrophic forgetting on Continual TTA with non i.i.d. conditions. Average accuracy (\\\\%) with clean ImageNet.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 80.0 |78.0|\\n| Tent | 10.1 | 9.8 |\\n| Tent+TTE | **75.1(+65.0)** | **79.9(+70.1)** |\\n| SAR | 56.7 | 76.2 |\\n| SAR+TTE | **75.4(+18.7)** | **80.4(+4.2)** |\\n| DeYO | 8.1 | 38.4 |\\n| DeYO+TTE | **71.3(+63.2)** | **77.9(+39.5)** |\\n\\nThe final manuscript will include these distinctions and the additional experimental results in **Table 11 and Figure 8** to highlight their importance.\\n\\n> **W3. Unclear description. In section 3, the de-biased distillation subsection does not describe clearly where the bias comes from. I suggest the author should explain the bias again. Also, I can not understand what the connection between the spike phenomena of the accuracy curve and the bias.**\\n\\nWe appreciate your suggestion to clarify and improve Section 3. In the revised manuscript, we have thoroughly revised this section by:\\n- **Definition of Bias**: We define the degradation factor in ensemble output.\\n- **Explaining the Bias Problem**: We have added a detailed explanation of how biased outputs emerge during TTA and negatively impact adaptation performance. \\n- **Detailing the Proposed Distillation Scheme**: We provide a multi-step explanation of the proposed scheme, highlighting how it addresses bias for robust TTA optimization.\\n\\nBriefly, the bias represents a prominent aspect in model predictions when TTA models collapse. 
We interpreted the collapsing phenomenon through the lens of linear mode connectivity by monitoring the loss-surface barrier between two TTA models. By introducing a robust knowledge distillation, TTE prevents model collapse and maintains linear mode connectivity during TTA, ensuring ensemble benefits during optimization. We hope these updates address your concerns and enhance the overall presentation of the manuscript.\\n\\nWe are eager to engage in further discussion and would greatly appreciate any additional suggestions or concerns you might have. Your feedback is invaluable in refining and enhancing our work, and we are committed to addressing any remaining issues.\"}", "{\"title\": \"Response to Reviewer NKQc (4/4)\", \"comment\": \"> References\\n> 1. Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Efficient test-time model adaptation without forgetting. In International Conference on Machine Learning, pp. 16888\\u201316905. PMLR, 2022.\\n> 2. Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. arXiv preprint arXiv:1803.05407, 2018.\"}", "{\"title\": \"Response to Reviewer NKQc (1/4)\", \"comment\": \"We sincerely thank the reviewer for their thoughtful feedback and positive remarks on our work! We are also confident that addressing your comments will further enhance the quality and impact of our paper.\\n\\n> **W1. I would suggest the authors proofread again to avoid problems such as the following typo \\\"with lager and more complex\\\" -> \\\"with larger and more complex\\\" in the introduction or \\\"Adaptvie momentum\\\" -> \\\"Adaptive momentum.\\\"**\\n\\nThank you for pointing these out. We have carefully proofread the manuscript again and corrected all typographical errors, including the ones you noted. These changes ensure better readability and precision.\\n\\n> **W2. 
I am not so convinced by section 3.2, DE-BIASED AND NOISE-ROBUST KNOWLEDGE DISTILLATION. Could you clarify this a bit more in this section? And maybe make it more clear in the paper.**\\n\\nWe appreciate your suggestion to clarify and improve Section 3.2. In the revised manuscript, we have thoroughly revised this section by:\\n\\n- **Definition of Bias and Noise**: We define the two degradation factors in ensemble output: bias and noise.\\n- **Explaining the Bias Problem**: We describe how biased outputs arise during TTA, disrupt linear mode connectivity, and degrade adaptation performance.\\n- **Detailing the Proposed Distillation Scheme**: We provide a multi-step explanation of the proposed scheme, highlighting how it addresses bias and noise for robust TTA optimization.\\n\\nAs an extension to this clarification, we provide a brief summary of Section 3.2 and address the related reviewer question (**Q1: How important is this method?**). In this section, we propose de-biased and noise-robust knowledge distillation to address two degradation factors in ensemble representation before distillation: 1) **Bias** as a prominent aspect of model collapse and 2) **Noise** as prediction errors. The schemes do not directly improve adaptation capability but ensure **the stability of unsupervised TTA optimization**.\\n\\nIn particular, we would like to emphasize the technical novelty and effectiveness of the **de-biased distillation** approach in addressing the bias issues, which manifest as TTA models begin to collapse. Model collapse is a persistent challenge in TTA, where improperly biased predictions lead to all samples being misclassified into a single class. By reducing prediction bias, the proposed de-biasing scheme induces label smoothing effects and acts as a **regularizer for TTA relying only on unsupervised entropy minimization**. Therefore, TTE mitigates model collapse and stabilizes the adaptation process. 
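To make the de-biasing intuition concrete, here is a toy numpy sketch (illustrative only: the running-mean bias estimate and the `strength` parameter are our simplifications, not the exact formulation of Section 3.2):

```python
import numpy as np

def debias(probs, running_mean, strength=0.5):
    # Hypothetical de-biasing step: subtract part of the running-mean
    # prediction in log space, then renormalize. A collapsing model pushes
    # running_mean toward one class; removing it flattens the teacher target,
    # which gives a label-smoothing-like regularization effect.
    logits = np.log(probs) - strength * np.log(running_mean)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Ensemble drifting toward class 0: a biased marginal prediction.
running_mean = np.array([0.90, 0.05, 0.05])
probs = np.array([0.80, 0.15, 0.05])
target = debias(probs, running_mean)
print(target)  # mass is shifted away from the over-predicted class
```

The corrected `target` would then serve as the distillation target, so the student is no longer pulled toward the collapsed class.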
Extensive experiments in our paper (**Table 1-4**) demonstrate that TTE consistently prevents collapse across four benchmark datasets and four test-time scenarios, where baseline methods frequently fail. These results underscore the robustness of our approach.\\n\\n> **W3. In Equation 5, there is no hyperparameter to balance the terms. I think it should be included, right?**\\n\\nThank you for this observation. You are correct that a hyperparameter to balance the terms could potentially be optimized to enhance adaptation performance in TTE. However, we chose to weight the two objective terms equally to avoid the challenges associated with hyperparameter selection during test time.\\n\\nTo address potential overfitting concerns, we deliberately avoided over-tuned hyperparameter configurations for specific tasks or benchmarks. Instead, we applied identical hyperparameter settings across all four benchmarks and four test-time scenarios. This approach underscores the generality and adaptability of TTE without reliance on task-specific tuning.\\n\\nWe have clarified this in the revised manuscript with the following explanation: \\\"To avoid over-tuned hyperparameter configurations, the two objective terms are assigned equal weights.\\\"\"}", "{\"title\": \"Thanks to Reviewer NKQc\", \"comment\": \"We sincerely appreciate your recognition and acknowledgment of the strengths of our work.\\n\\nShould you have any additional questions or comments, please do not hesitate to share them with us. We are fully committed to addressing them in detail.\\n\\nThank you again for your constructive and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer drM4 (Dense Prediction Task)\", \"comment\": \"Following the reviewer's comments, we extended our experiments to semantic segmentation, a dense prediction task. 
Using the ViTBase architecture and a pretrained model from DINO v2 [1], we added a linear layer to create a simple baseline model to perform semantic segmentation. This model was fine-tuned on the COCO training set [2] for 10 epochs, achieving 56.0 mIoU on the original COCO validation set.\\n\\nTo simulate distribution shifts, inspired by [3], we reconstructed the COCO validation data with four representative shifts: noise, blur, brightness and JPEG. Using this shifted dataset, we conducted adaptation experiments with Tent and Tent+TTE to evaluate their performance. In this setting, we found that the conventional TTA approach (Tent) showed limited improvement and, in some cases, even degraded performance compared to the baseline model, and as a result, the integration of TTE also showed marginal benefits. We believe this exploration highlights the unique challenges of TTA under dense prediction and will promote further research to address these challenges for this community. The detailed results are presented below.\\n\\n### Additional TTA experimental results for semantic segmentation with mIoU \\n| | Noise | Blur | Brightness | JPEG |\\n|-|:-:|:-:|:-:|:-:|\\n| NoAdapt | 39.2 | 55.4 | 53.1 | 35.3 |\\n| Tent | 39.4 | 55.4 | 53.1 | 34.4 |\\n| Tent+TTE | 39.5 | 55.5 | 53.1 | 35.3 |\\n\\nWe sincerely thank you once again for taking the time to provide your thoughtful and valuable feedback on our work. Due to the limited duration of the rebuttal phase, we kindly ask if you have any post-rebuttal feedback that could help us further enhance the quality of our paper.\\n\\n> References\\n> 1. Oquab, Maxime, et al. DINOv2: Learning Robust Visual Features without Supervision, TMLR 2024.\\n> 2. Lin, Tsung-Yi, et al. Microsoft COCO: Common objects in context. ECCV 2014.\\n> 3. Hendrycks, Dan, and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. 
ICLR 2019.\"}", "{\"title\": \"Kind Reminder: Looking Forward to Your Post-Rebuttal Feedback\", \"comment\": \"We sincerely thank you once again for taking the time to provide your thoughtful and valuable feedback on our work. Due to the limited duration of the rebuttal phase, we kindly ask if you have any post-rebuttal feedback that could help us further enhance the quality of our paper.\\n\\nAs a summary of our previous response:\\n- We clarified how the proposed adaptive ensemble scheme is motivated by our discovery of linear mode connectivity in TTA. Furthermore, we performed additional analyses to validate the linear mode connectivity property across various potential TTA scenarios, reinforcing the motivation behind TTE. \\n- We also clarified another technical novelty: the robust knowledge distillation approach. To further demonstrate its effectiveness, we additionally performed the experiments using other baseline models (i.e., Tent+TTE and SAR+TTE).\\n- Following the reviewer\\u2019s comments, we thoroughly revised the method section of the main manuscript to enhance clarity and improve understanding.\\n\\nWe deeply value your feedback and are confident that it has significantly contributed to improving the quality of this paper and strengthening our work.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"metareview\": \"The paper proposes a method called Test-Time Ensemble (TTE), which enriches model representations during online test-time adaptation (TTA) inspired by domain generalization. The proposed method is well-motivated and technically solid. The experimental results are extensive and convincing. The authors successfully addressed the major concerns raised by the reviewers about the technical details, novelty, possible extension, experiments, and writing during the rebuttal. All reviewers agree to accept this paper. 
Overall, it is a technically solid paper that meets the expectations of the ICLR community.\", \"additional_comments_on_reviewer_discussion\": \"The major concerns raised by the reviewers include technical details, novelty, possible extension, experiments, and writing. The rebuttal successfully addresses these issues. At the end of the discussion, all reviewers agree to accept this paper.\"}", "{\"comment\": \"Hello, I would like to thank the authors for their detailed answers and additional clarifications about the work. I think that all my questions were addressed, and I am comfortable with the answers. Thus, I am changing my score from \\\"marginally above the acceptance threshold\\\" to \\\"accept, good paper\\\".\"}", "{\"title\": \"Kind Reminder: Looking Forward to Your Post-Rebuttal Feedback\", \"comment\": \"We sincerely thank you once again for taking the time to provide your thoughtful and valuable feedback on our work. Due to the limited duration of the rebuttal phase, we kindly ask if you have any post-rebuttal feedback that could help us further enhance the quality of our paper.\\n\\nSummary of the previous responses:\\n- The low hyperparameter sensitivity of TTE was clarified, and a discussion of the challenges associated with hyperparameter tuning during test time was included as a limitation in response to the reviewer\\u2019s concern.\\n- The two key technical contributions of TTE were further clarified, and the manuscript was thoroughly revised to emphasize its distinct contributions.\\n- Additional experiments were conducted with other baseline models (Tent and SAR) to further validate the stability of TTE.\\n- The discussion of computational costs has been softened to provide a more balanced perspective.\\n- The proposed knowledge distillation was clarified to explain how it alleviates the impacts of noisy predictions.\\n\\nWe deeply value your feedback and are confident that it has significantly contributed to improving the quality of this paper and strengthening our work.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper proposes a method called Test-Time Ensemble (TTE), which uses an ensemble strategy to dynamically enrich model representations during online test-time adaptation (TTA). TTE constructs an ensemble network by averaging the parameter weights of different TTA models, which are continuously updated using test data. This weight averaging technology captures model diversity and improves representation quality without increasing the computational burden of managing multiple models. TTE further combines dropout to promote diverse collaboration of representations within TTA models, and also proposes a debiased and noise-resistant knowledge distillation scheme to stabilize the learning of TTA models in the ensemble. TTE can be seamlessly integrated with existing TTA methods, enhancing their adaptive capabilities in various challenging scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. TTE utilizes an ensemble strategy to dynamically enrich model representations during online test-time adaptation (TTA), which is an interesting approach.\\n2. TTE constructs an ensemble network by averaging the parameter weights of different TTA models, and this weight averaging captures model diversity, improving representation quality without increasing the computational burden of managing multiple models.\\n3. TTE further promotes the diversity of representations within TTA models by combining with dropout, and proposes a debiased and noise-resistant knowledge distillation scheme to stabilize the learning of TTA models in the ensemble.\\n4. The experiments are extensive and the results are superior to other compared methods.\", \"weaknesses\": \"1. The TTE method involves multiple hyperparameters, such as the momentum coefficient , dropout ratio and temperature, which may affect the stability and generalization of the method. 
Further research on how to reduce the dependence on hyperparameters is crucial.\\n2. TTE integrates many well-established techniques such as ensemble, dropout, knowledge distillation, which have been utilized in TTA or few shot learning. The combination has weaken the novelty of the paper, and the unique contributions of the paper should be classified.\\n3. How to conduct the weight-space ensemble without adding computational complexity? Will the technique increase storage consumption?\\n4. The knowledge distillation-based debiasing and anti-noise strategies proposed in the paper may not be able to completely solve the problem of noisy test data. How to solve the scenario that the pseudo labels are incorrect?\", \"questions\": \"1. Why the results of CoTTA in Continual TTA with non-i.i.d. conditions are only 2.2% and 3.4%?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your detailed response. I have gone through all reviews. Now I think my major concerns are addressed. Therefore, I raise the rating a little bit.\"}", "{\"title\": \"Response to Reviewer drM4\", \"comment\": \"We sincerely thank the reviewer for their constructive feedback and for recognizing the strengths of our work. We hope to address your comments effectively and further improve the quality of this paper.\\n\\n> **W1. Although the results presented in Table 3 show the performance improvement achieved by the proposed framework in the continual TTA scenario, it is unclear how the method enhances baseline performance in later adaptation stages. 
Additionally, I would like to know if the proposed method addresses the issue of catastrophic forgetting in this context.**\\n\\nThe technical contributions of TTE can be summarized in two key aspects: 1) introducing adaptive ensemble strategies from the perspective of linear mode connectivity (Section 3.1), and 2) incorporating debiased and noise-robust knowledge distillation (KD) as a stable optimization strategy during TTA (Section 3.2). The experiments in Table 3, which involve a challenging scenario in which TTA models are prone to collapse, were introduced to evaluate the **stable optimization in TTE**, induced by the second contribution.\\n\\nBriefly, the proposed distillation method not only maintains linear mode connectivity during TTA but also effectively alleviates model collapse, a common issue that conventional methods still suffer from (resulting in near-zero accuracy in Table 3 or catastrophic forgetting). Notably, the extensive experiments (Table 1-4) demonstrate that the integration of TTE consistently avoids collapse across four benchmark datasets and four test-time scenarios.\\n\\nTo improve clarity, we have thoroughly revised the sections discussing the contributions and conducted additional experiments based on the reviewer\\u2019s comments. The details of our revisions are as follows:\\n\\n**Additional experiments for robustness**: To validate the robustness of TTE further, we revisited the experiments in Table 3 and conducted experiments with other baselines integrated with TTE (e.g., **Tent+TTE** and **SAR+TTE**). The results were consistent with the results of **DeYO+TTE** in Table 3, as follows:\\n\\n### Continual TTA with non-i.i.d. conditions. 
Average accuracy (%) with ImageNet-C.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 30.6 | 29.9 |\\n| Tent | 0.6 | 3.9 |\\n| Tent+TTE | **44.2(+43.6)** | **58.3(+54.4)** |\\n| SAR | 23.0 | 46.0 |\\n| SAR+TTE | **44.9(+21.9)** | **60.3(+14.3)** |\\n| DeYO | 2.8 | 53.5 |\\n| DeYO+TTE | **49.7(+46.9)** | **61.6(+8.1)** |\\n\\n**Additional experiments for catastrophic forgetting**: Following your comments, we concurrently measured the accuracy on the clean ImageNet dataset immediately after each adaptation process in the above experiments. The results show that the integration with TTE successfully avoids forgetting issues, while other baseline methods often suffer from them, as follows.\\n\\n### Comparison of preventing catastrophic forgetting on Continual TTA with non-i.i.d. conditions. Average accuracy (%) with clean ImageNet.\\n| | ResNet50-GN | ViTBase |\\n|-|:-:|:-:|\\n| NoAdapt | 80.0 | 78.0 |\\n| Tent | 10.1 | 9.8 |\\n| Tent+TTE | **75.1(+65.0)** | **79.9(+70.1)** |\\n| SAR | 56.7 | 76.2 |\\n| SAR+TTE | **75.4(+18.7)** | **80.4(+4.2)** |\\n| DeYO | 8.1 | 38.4 |\\n| DeYO+TTE | **71.3(+63.2)** | **77.9(+39.5)** |\\n\\n**Writing revision**: We have clarified the experimental results in **Section: Continual TTA with non-i.i.d. conditions** for Table 3 and rewritten **Section 3.2** to provide a clearer explanation of the robust knowledge distillation approach. The details of the additional experiments have been included in **Figure 8** and **Table 12** in the final revision.\\n\\n> **W2. Is it possible to extend the proposed benchmark construction method to dense prediction tasks, such as semantic segmentation? It would be very meaningful if it can be applied to various tasks.**\\n\\nWe appreciate the reviewer\\u2019s insightful suggestion regarding the applicability of TTE. 
Importantly, the proposed TTE framework does not rely on strategies specific to image classification, which suggests that expanding its applicability to dense prediction tasks should be feasible. While we cannot provide experimental results at this time, we are actively preparing experiments to explore this direction and will include them if feasible within the rebuttal period.\\n\\nWe are open to further discussion and welcome any additional concerns or suggestions you may have. Please feel free to share your thoughts, as we are committed to improving the quality of our work.\"}", "{\"title\": \"Response to Reviewer sF3M (1/2)\", \"comment\": \"Thank you for recognizing the strengths of our work, including the proposal of a new problem and its practical applications. We also appreciate your thoughtful feedback, which highlights areas where further clarification can enhance the paper. Below, we address your comments and questions, aiming to fully resolve your concerns.\\n\\n> **W1. The analysis of test-time adaptation does not inspire the new methods. The moving average ensemble methods are popularly adapted in self-supervised learning and ensemble methods. I consider the Linear Mode Connectivity theory should tell the reason and the situation that the models generated during test-time adaptation.**\\n\\nThank you for raising this concern. We would like to clarify the motivation and methodological novelty of our work and how it builds upon the insight from linear mode connectivity (LMC).\\n\\nLinear mode connectivity reveals that TTA models, even when adapted to different distributions, can be **weight-averaged across a wide range of combinations** without degrading performance. 
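The "no barrier along the linear path" check underlying this claim can be sketched as follows (a toy quadratic loss stands in for evaluating the adapted networks on test batches; a `barrier` near zero means the two weight sets can be safely averaged):

```python
import numpy as np

def interpolate(theta_a, theta_b, alpha):
    # Linear path in weight space between two adapted models.
    return {k: (1 - alpha) * theta_a[k] + alpha * theta_b[k] for k in theta_a}

def barrier(loss_fn, theta_a, theta_b, steps=11):
    # Loss-barrier check for linear mode connectivity: the worst loss on the
    # path minus the average of the endpoint losses. Near zero means the two
    # models are linearly connected, so weight averaging is safe.
    losses = [loss_fn(interpolate(theta_a, theta_b, a))
              for a in np.linspace(0.0, 1.0, steps)]
    return max(losses) - 0.5 * (losses[0] + losses[-1])

# Toy quadratic "test loss"; real use would evaluate the network on test data.
loss = lambda th: float(np.sum((th["w"] - 2.0) ** 2))
t1, t2 = {"w": np.array([1.0, 3.0])}, {"w": np.array([3.0, 1.0])}
print(barrier(loss, t1, t2))  # 0.0: no barrier on this toy path
```

Here the midpoint of the path is even better than either endpoint, which is the behavior that motivates weight-space ensembling.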
As shown in **Figure 1**, we empirically validate this property by demonstrating that weight-averaging of TTA models adapted to distinctly different distributions (noise and blur) consistently maintains or improves performance.\\n\\nBuilding on this insight, we propose **adaptive ensemble strategies** that dynamically determine the weight-averaging coefficient (i.e., momentum) to construct ensembles that better leverage the representation diversity of TTA models. This adaptive scheme is particularly valuable in **online TTA**, where data distributions are unknown and subject to dynamic shifts. In **Figure 5**, the Continual TTA experiments demonstrate that our adaptive ensembles significantly outperform static ensembles by effectively incorporating the representation diversity across models.\\n\\nWhile some elements of our framework may resemble previous techniques, we provide, for the first time, empirical evidence that the TTA process can effectively leverage ensemble techniques by demonstrating linear mode connectivity during adaptation. To the best of our knowledge, this work represents the **first application of linear mode connectivity in TTA research**, introducing a novel adaptive ensemble method tailored to this context. This new perspective on TTA optimization offers valuable benefits to this community.\\n\\nHowever, we acknowledge that our initial study did not exhaustively evaluate all potential situations in the TTA process. To address this concern, we have conducted additional analyses to verify the robustness of LMC under more diverse conditions. Building on our previous work with two TTA models, we extended the evaluation to scenarios involving four TTA models and also considered continual TTA conditions. The results show that ensembles generally maintain or improve performance in these expanded settings, demonstrating that linear mode connectivity remains valid, as detailed below.\\n\\n### LMC analysis with four TTA models. 
Averaged classification accuracy (\\\\%) with ImageNet-C.\\n| | Target shifts | Non-target shifts |\\n|-|:-:|:-:|\\n| No Adapt. | 24.6 | 31.8 |\\n| TTA (Gauss) | 37.7 | 47.1 |\\n| TTA (Defocus) | 40.3 | 43.2 |\\n| TTA (Snow) | 38.7 | 44.0 |\\n| TTA (Contrast) | 38.6 | 40.4 |\\n| Ensemble (G+D+S+C) | **45.5** | **47.6** |\\n\\n### LMC analysis with continual TTA processes. Averaged classification accuracy (\\\\%) with ImageNet-C.\\n| | Target shifts | Non-target shifts |\\n|-|:-:|:-:|\\n| No Adapt. | 24.6 | 31.8 |\\n| TTA (G$\\\\rightarrow$D$\\\\rightarrow$S$\\\\rightarrow$C) | 49.4 | 49.6 |\\n| Ensemble (G+D+S+C) | **50.7** | **51.5** |\\n| TTA (C$\\\\rightarrow$G$\\\\rightarrow$D$\\\\rightarrow$S) | 45.3 | 46.8 |\\n| Ensemble (C+G+D+S) | **52.2** | **51.1** |\\n| TTA (S$\\\\rightarrow$C$\\\\rightarrow$G$\\\\rightarrow$D) | **54.6** | **52.3** |\\n| Ensemble (S+C+G+D) | 53.1 | 51.6 |\\n| TTA (D$\\\\rightarrow$S$\\\\rightarrow$C$\\\\rightarrow$G) | 0.2 | 1.6 |\\n| Ensemble (D+S+C+G) | **7.4** | **20.0** |\\n\\nTo clarify the motivation of TTE, all the response will be included in the final revised manuscript and the details of the additional experiments will be included in **Table 7 and 8**. We hope this addresses your concerns and highlights the novelty and applicability of our method.\\n\\n> **W2. Limited technical novelty: this paper proposes the two-branch structure and leverage the weight average to improve the performance. Similar techniques are implemented in https://github.com/huggingface/pytorch-image-models. I do not see anything new compared to what have been proved in image classification.**\\n\\nExpanding to the previous response to W1, we would like to further emphasize the distinct contributions of TTE, which go beyond conventional methods.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank the reviewers for their thoughtful feedback and valuable insights. 
Your comments have helped us identify areas to clarify and strengthen our work. Below, we summarize the key strengths of our contribution, recognized by reviewers.\", \"### **Summarized contributions recognized by the reviewers.**\", \"The problem formulation, **exploring TTA as a domain generalization problem**, is well-motivated [drM4, NKQc] and is initially proposed in this paper [sF3M].\", \"**Uncovering linear mode connectivity within TTA models** provides valuable insights [drM4] and enables the development of an **adaptive ensemble strategy** for improving model representations during online TTA [drM4, xGvc].\", \"TTE also introduces a **debiased and noise-robust knowledge distillation scheme** that stabilizes TTA optimization [xGvc].\", \"TTE achieves **state-of-the-art (SOTA) performance**, surpassing conventional TTA models [drM4, xGvc], supported by well-motivated experimental results [NKQc].\", \"**Extensive ablation studies** validate the effectiveness of the proposed modules and highlight their contributions to the overall performance [drM4, xGvc].\", \"TTE is **practical, easy to implement with baseline models** [NKQc, sF3M], and beneficial to the research community [NKQc].\", \"### **Major revisions in the updated manuscript**\", \"Following the reviewers' suggestions, we have thoroughly revised the paper and uploaded the updated version, which we believe has significantly enhanced the quality of this paper. 
Here, we would like to highlight the major revisions, as follows.\", \"#### **Introduction**\", \"The introduction has been revised to clarify the concept of TTE as an adaptive weight ensemble scheme.\", \"The discussion of computational costs has been softened to provide a more balanced perspective.\", \"#### **Preliminaries**\", \"Additional analysis has been conducted to validate the property of linear mode connectivity under various potential TTA scenarios, further supporting the motivation for TTE.\", \"#### **Methods**\", \"The distinct contributions of adaptive ensemble strategies and their significance in online TTA have been clarified and are now more prominently presented in the revised paper.\", \"Section 3.2 has been revised to improve the clarity and understanding of the proposed knowledge distillation method.\", \"#### **Experiments**\", \"Additional experiments were conducted with other baseline models (Tent[1]+TTE and SAR[2]+TTE). We revisited the experiments in Tables 3 and 4 to further validate the effectiveness of TTE.\", \"Additional experiments under continual TTA with non-i.i.d. conditions demonstrated the stability of TTE and its ability to prevent catastrophic forgetting.\", \"Additional comparison studies with stochastic weight averaging [3], a prior method for offline domain generalization, were performed to highlight the advantages of TTE.\", \"Furthermore, we will incorporate all the responses provided during the discussion phase into the final manuscript. For details, please refer to our responses to each reviewer\\u2019s comments.\", \"> References\", \"> 1. Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. International Conference on Learning Representations (ICLR) 2020.\", \"> 2. Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. 
International Conference on Learning Representations (ICLR) 2023.\", \"> 3. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. Conference on Uncertainty in Artificial Intelligence (UAI) 2018.\"]}", "{\"title\": \"Thanks to Reviewer sF3M\", \"comment\": \"We greatly appreciate your recognition of the core contributions and strengths of our work.\\n\\nIf you have any further questions or suggestions, please feel free to reach out. We are fully committed to addressing any concerns or providing additional clarifications.\\n\\nThank you once again for your constructive and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
4w99NAikOE
IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
[ "Xinchen Zhang", "Ling Yang", "Guohao Li", "YaQi Cai", "xie jiake", "Yong Tang", "Yujiu Yang", "Mengdi Wang", "Bin CUI" ]
Advanced diffusion models like Stable Diffusion 3, Omost, and FLUX have made notable strides in compositional text-to-image generation. However, these methods typically exhibit distinct strengths for compositional generation, with some excelling in handling attribute binding and others in spatial relationships. This disparity highlights the need for an approach that can leverage the complementary strengths of various models to comprehensively improve the composition capability. To this end, we introduce IterComp, a novel framework that aggregates composition-aware model preferences from multiple models and employs an iterative feedback learning approach to enhance compositional generation. Specifically, we curate a gallery of six powerful open-source diffusion models and evaluate their three key compositional metrics: attribute binding, spatial relationships, and non-spatial relationships. Based on these metrics, we develop a composition-aware model preference dataset comprising numerous image-rank pairs to train composition-aware reward models. Then, we propose an iterative feedback learning method to enhance compositionality in a closed-loop manner, enabling the progressive self-refinement of both the base diffusion model and reward models over multiple iterations. Detailed theoretical proof demonstrates the effectiveness of this method. Extensive experiments demonstrate our significant superiority over previous methods, particularly in multi-category object composition and complex semantic alignment. IterComp opens new research avenues in reward feedback learning for diffusion models and compositional generation. Code: https://github.com/YangLing0818/IterComp
[ "Compositional text-to-image generation", "Feedback learning for diffusion model" ]
Accept (Poster)
https://openreview.net/pdf?id=4w99NAikOE
https://openreview.net/forum?id=4w99NAikOE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zeLnuVrNK4", "vrB5BYhmAr", "ujOFBy7hDX", "tBBby97ctz", "t6hsJ0OzXn", "nwxUTihai1", "nvB67hq7hk", "fhbaP9qgC7", "cMslX3Q7jw", "YzrBVpC5oz", "W9Ukx91nQH", "UjBU4nsmso", "Tt7dJnVIYF", "TMNCVTeTNc", "Rm2657qk97", "NSEiMS5oRB", "Kwbr1r343o", "JczRzuiErP", "IDlsChw4xr", "ENKeelceX2", "DdMz02Gvd7", "DZnhJko3JT", "Aa5ApLQ1rk", "9c7ULpBeQU", "9E2HkiQ3B1", "5o5RV8qVME" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review" ], "note_created": [ 1732217305657, 1732216836175, 1732217392050, 1730008854771, 1732271495484, 1733718440639, 1729681542737, 1730597517416, 1732216791476, 1732216471841, 1732277930596, 1732217425289, 1732279629785, 1732566478747, 1732217093051, 1732216634379, 1732216245935, 1732216356095, 1732272488550, 1730713806486, 1732216576609, 1732216429541, 1732578835711, 1732215954379, 1737523494810, 1730684799782 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_bbqi" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_ocHL" ], [ "ICLR.cc/2025/Conference/Submission2273/Area_Chair_5gWS" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_ocHL" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_HHze" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_bbqi" ], [ 
"ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_52Cf" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_Vtxw" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Submission2273/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2273/Reviewer_52Cf" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer ocHL (Part 2/4)\", \"comment\": \"**Q4: It would be valuable to examine Diffusion-DPO's performance when trained on the collected dataset. Currently, Diffusion-DPO is trained on the pick-a-pic dataset, which is larger but lacks compositional details. These results would be necessary to evaluate the proposed method's effectiveness using consistent standards.**\", \"a4\": \"Since Diffusion-DPO lacks an explicitly trained reward model, it cannot accurately rank the images generated by its optimized base model relative to those in the original model gallery. As a result, Diffusion-DPO is unable to perform iterative feedback learning. 
Therefore, we only trained Diffusion-DPO for a single round using the original dataset and compared its performance against IterComp after first round of training:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| -------------------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |\\n| Diffusion-DPO(first round) | 0.6822 | 0.5691 | 0.6224 | 0.2216 | 0.3291 | 0.4263 |\\n| IterComp(first round) | 0.7239 | 0.6083 | 0.6940 | 0.2692 | 0.3308 | 0.4572 |\\n| IterComp | **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371** | **0.4873** |\\n\\nFrom the results, it is evident that IterComp effectively leverages the model gallery to collect composition-aware model preferences, enabling it to learn diverse compositional dimensions during the first round of training. Notably, attributes such as color, texture, and spatial relationships exhibit significant improvement. Compared to Diffusion-DPO, the lack of an explicitly trained reward model leads to mixed training on image pairs with differing preferences, which weakens the model's ability to learn specific compositional preferences. For instance, its improvements in spatial relationships are notably limited, highlighting that the DPO approach is not well-suited for complex tasks like compositional generation. 
Moreover, IterComp\\u2019s iterative optimization approach delivers substantial advantages across all metrics, further underscoring the effectiveness of iterative feedback learning in addressing complex compositional tasks.\\n\\n**Q5: Expert ranking may inadvertently include aesthetic information [This observation is meant to prompt discussion rather than highlight a weakness, as the underlying cause remains unclear]: When IterComp is applied, the aesthetic score improves, suggesting that reward models can interpret image aesthetics, since reward maximization leads to better aesthetic scores. This could be because either expert rankings are influenced by image aesthetics (beyond compositional attributes) or because models with better composition naturally generate more aesthetic images (for instance, FLUX, being the best compositional model, likely produces more aesthetically pleasing outputs).**\", \"a5\": \"This is a valuable question. We would like to emphasize that the aesthetic predictor was trained using the Aesthetic Visual Analysis (AVA) and Simulacra Aesthetic Captions (SAC) datasets, which are annotated by experts or photography enthusiasts. It is important to note that these annotators tend to be more critical of images with unrealistic or awkward compositions. The aesthetic predictor is designed to filter for high-quality images that excel in both detail representation and composition. Thus, **reasonable composition is a key aspect of aesthetics, and models with better composition naturally generate more aesthetically pleasing images**. Through multiple iterations of self-refinement, IterComp achieves highly reasonable compositions, resulting in a high degree of aesthetic quality.\\n\\n\\n**Q6: The paper uses 40 inference steps for all models to ensure fairness. 
However, some models can generate samples with fewer steps; for example, FLUX-dev uses 28 steps by default.**\", \"a6\": \"Through extensive testing, we found that models like IterComp, FLUX, and SDXL are highly robust to the number of inference steps, with their performance being minimally affected. However, InstanceDiffusion is more sensitive to the number of steps. Therefore, for metric evaluation and visualization, we used 20 steps for the other models but 40 steps for InstanceDiffusion. To ensure a fair comparison in terms of inference speed, we set the number of inference steps to 40 for all models. However, this does not affect IterComp's ability to generate higher-quality images at a faster speed compared to other methods.\"}", "{\"title\": \"Response to Reviewer bbqi (Part 2/2)\", \"comment\": \"**Q3: An additional experiment focusing on the first-round fine-tuned model\\u2014trained solely with human-annotated data\\u2014would be valuable. This would clarify the necessity and impact of the iterative training approach.**\", \"a3\": \"Thank you for your suggestions. We tested the first-round fine-tuned model and also conducted a first-round test on Diffusion-DPO using the same dataset. 
The results are as follows:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| -------------------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |\\n| Diffusion-DPO(first round) | 0.6822 | 0.5691 | 0.6224 | 0.2216 | 0.3291 | 0.4263 |\\n| IterComp(first round) | 0.7239 | 0.6083 | 0.6940 | 0.2692 | 0.3308 | 0.4572 |\\n| IterComp | **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371** | **0.4873** |\\n\\nFrom the results, it is evident that IterComp effectively leverages the model gallery to collect composition-aware model preferences, enabling it to learn diverse compositional dimensions during the first round of training. Notably, attributes such as color, texture, and spatial relationships exhibit significant improvement. Compared to Diffusion-DPO, the lack of explicitly trained reward models leads to mixed training on image pairs with multiple preferences, which weakens the model's ability to learn specific compositional preferences. For instance, its improvements in spatial relationships are notably limited, highlighting that the DPO approach is not well-suited for complex tasks like compositional generation. Moreover, IterComp\\u2019s iterative optimization approach delivers substantial advantages across all metrics, further underscoring the effectiveness of iterative feedback learning in addressing complex compositional tasks.\\n\\n\\n\\n**Q4: Based on my experience, RLHF in diffusion models can often be unstable. I\\u2019m curious whether your method consistently produces stable results or if there\\u2019s a risk of occasionally poorer outcomes. 
I\\u2019m concerned that the iterative training process might lead to overfitting on biases present in the reward models, potentially reducing overall robustness.**\", \"a4\": \"This is an excellent question. Regarding the stability of IterComp, we conducted experiments **in the appendix A.3 of the updated paper**. We selected five baseline methods\\u2014SD1.5, SDXL, InstanceDiffusion, Diffusion-DPO, and FLUX\\u2014along with two evaluation metrics: Complex and CLIP-Score. Using the same 50 random seeds, we calculated the mean and variance of each model under these two metrics. To visualize stability, we used the variance of each algorithm as the radius and scaled it uniformly by a factor of 10^4 for clarity.\\n\\nIn terms of compositional stability, **as shown in Figure 8 (a)**, we found that IterComp not only achieved the best overall performance but also exhibited the highest stability. This is attributed to the iterative optimization process, which refines the model by analyzing and improving its output samples during each iteration. The iterative training approach allows the model to perform feedback learning on its own generated outputs, rather than relying solely on external datasets. This significantly enhances the model\\u2019s stability. When evaluating stability in terms of realism and generation quality, **as shown in Figure 8 (b)**, our method also achieved the best stability. These findings demonstrate that the iterative training approach not only significantly improves model performance but also greatly enhances its stability.\\n\\nTo address the issue of overfitting, we adopted a targeted approach for training the reward model during each iteration. Specifically, we only fine-tuned the reward model using image pairs that include newly generated samples. 
For example, if the newly generated image is $x_0$, the reward model is fine-tuned on datasets such as $(x_0, x_{SDXL}), (x_0, x_{FLUX})$, while excluding repeated training on pairs like $(x_{SDXL}, x_{FLUX})$. This approach effectively prevents overfitting during multiple iterations while enabling the reward model to better learn composition-aware model preferences. As a result, the reward model can consistently refine and improve the optimized model in each iteration, achieving both self-correction and self-improvement.\"}", "{\"title\": \"Response to Reviewer ocHL (Part 3/4)\", \"comment\": \"**Q7: Diffusion-DPO operates in the latent space without requiring image decoding. In contrast, IterComp requires image decoding for the reward models (though this could be avoided by training the reward model with latent space inputs), likely resulting in slower training. Additional commentary on the training scheme would be valuable.**\", \"a7\": \"Here is an approximate listing of the training time for each stage (all our experiments were conducted on 4*NVIDIA A100-80G GPUs):\\n\\n| Phrase | Training Time |\\n| ---------------------------------------- | ------------- |\\n| Train reward models (iteration1) | 31min |\\n| Train base diffusion models (iteration1) | 3h 37min |\\n| Train reward models (iteration2) | 14min |\\n| Train base diffusion models (iteration2) | 1h 31min |\\n| Train reward models (iteration3) | 17min |\\n| Train base diffusion models (iteration3) | 1h 31min |\\n| Total | ~7h 41min |\\n\\nTo prevent overfitting and to improve the self-correction of the reward model, from the second iteration, we only fine-tuned the reward model using image pairs that include newly generated samples. For example, if the newly generated image is $x_0$, the reward model is fine-tuned on datasets such as $(x_0, x_{SDXL}), (x_0, x_{FLUX})$, while excluding repeated training on pairs like $(x_{SDXL}, x_{FLUX})$. 
This approach effectively prevents overfitting during multiple iterations while enabling the reward model to better learn composition-aware model preferences. Moreover, from the second iteration, the optimization of base diffusion models reduces the number of epochs by half, from 2 to 1. As a result, with the continuous refinement of iterative feedback learning, the training time of the model decreases progressively. Ultimately, the total training time is approximately 7 hours and 41 minutes.\\n\\n**Q8: The paper describes an iterative feedback mechanism that optimizes both reward models and the base model. However, examination of `feedback_train.py` reveals that only unet parameters (base model) are passed to the optimizer. This suggests that only the base model is being optimized, while reward models remain static. This difference requires clarification.**\", \"a8\": \"Thank you for your thorough review. We apologize for any confusion caused. In our codebase, the training of reward models and the base diffusion model **are conducted separately**. The training code for the reward models is located in `train/train.py`, while the training code for the base diffusion model can be found in `Iterative_feedback/feedback_train.py`.\\n\\nFor the iterative feedback learning process, we first expand the model gallery based on the previously optimized diffusion model using `data/iterative_expand_gallery.py`. Then, we enhance the reward models using the expanded model gallery in `train/train.py`. Finally, we fine-tune the base diffusion model using the improved reward model in `Iterative_feedback/feedback_train.py`. \\n\\nWe regret any misunderstanding this has caused and will update our code to consolidate the training of the reward model and the base diffusion model into a single file. This change will streamline our process, eliminate arbitrary filenames, and reduce code complexity. 
Thank you for your feedback!\"}", "{\"summary\": \"This study addresses improving the compositional text-to-image generation capability of less powerful diffusion models. The authors contribute in two main areas. First, they decompose the capabilities of various diffusion models into three domains\\u2014attribute binding, spatial relationships, and non-spatial relationships\\u2014and rank model outputs accordingly to guide reinforcement learning with reward models specific to each domain. Second, they introduce an iterative training process that utilizes the fine-tuned diffusion model outputs to progressively enhance the reward models.\\n\\nThrough multiple experiments, the study demonstrates the proposed method\\u2019s effectiveness, enabling early-stage models to achieve comparable generative abilities with reduced inference time. The authors also verify the effectiveness and general applicability of each design component across different models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured, making it accessible and easy for readers to follow.\\n2. Clear formulas are provided for each component, effectively removing any ambiguity.\\n3. Mathematical proofs substantiate the validity of the proposed method.\\n4. The authors conduct detailed, extensive experiments to support their approach.\\n5. Illustrative images are included, enhancing clarity and understanding.\", \"weaknesses\": \"1. About the number of texts used for attribute binding, \\\"500 prompts\\\" in Line 185 is inconsistent with \\\"1500\\\" in Table 1. Which is correct?\\n2. Although the experiments are detailed, some comparisons appear incomplete. The reinforcement learning from human feedback (RLHF) approach leverages outputs from advanced models like FLUX and SD3 for training, yet direct comparisons with these models are not provided. Including these comparisons would better highlight the method's effectiveness.\\n3. 
An additional experiment focusing on the first-round fine-tuned model\\u2014trained solely with human-annotated data\\u2014would be valuable. This would clarify the necessity and impact of the iterative training approach.\", \"questions\": \"In addition to the weakness mentioned above, I have a question regarding stability.\\n\\nBased on my experience, RLHF in diffusion models can often be unstable. I\\u2019m curious whether your method consistently produces stable results or if there\\u2019s a risk of occasionally poorer outcomes. I\\u2019m concerned that the iterative training process might lead to overfitting on biases present in the reward models, potentially reducing overall robustness.\\n\\nI hope the authors can make up for the weaknesses mentioned and address these questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors' response\", \"comment\": \"Thank you for your comprehensive response and the additional experimental results.\\n\\nThe new experiments effectively demonstrate IterComp's superior performance compared to baselines like Diffusion-DPO, while also highlighting how each reward type makes its own valuable contribution. Based on your thorough answers and clarifications, I am convinced of the paper's value and am raising my score to 8. \\n\\nI appreciate this fruitful discussion.\"}", "{\"metareview\": \"All reviewers agree to accept the paper. Reviewers appreciate the novel framework and significant performance improvement. Please be sure to address the reviewers' comments in the final version.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers agree to accept the paper.\"}", "{\"summary\": \"The paper presents a new dataset for compositional generation where experts evaluate the outputs of multiple pre-trained models. 
The dataset consists of three aspects of compositionality: attribute binding, spatial relationships, and non-spatial relationships, along with 52.5K image-rank pairs, which can be used for training feedback, ranking, or reward models.\\n\\nThe second contribution of the paper is using the collected dataset to train reward models for each of the three key aspects. A multimodal model (BLIP) is used as a feature extractor for both the prompts and the generated images, and the extracted features are projected by MLPs to output the reward. The goal is to predict how good the given image-prompt pairs are by training the model similar to contrastive learning (moving toward winning examples and away from losing examples).\\n\\nThe third contribution is improving the compositional ability of a base model (SDXL is selected but it should not be limited to that) by optimizing it using the trained reward models. The base model is trained to maximize the reward model's output so that its outputs are better aligned with the reward models, which are trying to enforce compositionality. Furthermore, the paper proposes an iterative update mechanism for both reward models and the base model. Reward models are updated to predict the rankings generated by experts while the base model is updated to maximize the outputs of reward models. Through this process, both the base model and reward models are improved for their specific tasks.\\n\\n**Important note:** This review's technical content and analysis are my original work. Large Language Models were used solely to improve grammar and writing style, without altering any meanings or substantive feedback.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The collected dataset can lead to better models and new research directions. Furthermore, researchers can follow a similar approach to collect their own data. 
The data can also be used for other RLHF methods, such as diffusion-DPO.\", \"The iterative feedback mechanism seems to be a novel way to optimize both reward models and the base model.\", \"The code is shared, which allows reviewers to follow the details of the proposed method.\", \"The performance gain from IterComp appears significant, as evaluated through user studies, quantitative analysis, and qualitative assessment.\"], \"weaknesses\": [\"Table 5 should be in the main part instead of the Appendix, as it simply demonstrates that the proposed method outperforms previous methods.\", \"Several experiments are missing:\", \"The paper combines all reward models simultaneously, likely leading to improved compositional performance. However, reviewers would benefit from seeing the individual effect of each reward model.\", \"While SDXL is chosen as the base model, testing other models would help reviewers understand how reward models affect different base architectures.\", \"It would be valuable to examine Diffusion-DPO's performance when trained on the collected dataset. Currently, Diffusion-DPO is trained on the pick-a-pic dataset, which is larger but lacks compositional details. These results would be necessary to evaluate the proposed method's effectiveness using consistent standards.\", \"Expert ranking may inadvertently include aesthetic information [This observation is meant to prompt discussion rather than highlight a weakness, as the underlying cause remains unclear]:\", \"When IterComp is applied, the aesthetic score improves, suggesting that reward models can interpret image aesthetics, since reward maximization leads to better aesthetic scores. 
This could be because either expert rankings are influenced by image aesthetics (beyond compositional attributes) or because models with better composition naturally generate more aesthetic images (for instance, FLUX, being the best compositional model, likely produces more aesthetically pleasing outputs).\", \"Some experimental conditions may be misleading:\", \"The paper uses 40 inference steps for all models to ensure fairness. However, some models can generate samples with fewer steps; for example, FLUX-dev uses 28 steps by default.\"], \"questions\": [\"Training time considerations:\", \"Diffusion-DPO operates in the latent space without requiring image decoding. In contrast, IterComp requires image decoding for the reward models (though this could be avoided by training the reward model with latent space inputs), likely resulting in slower training. Additional commentary on the training scheme would be valuable.\", \"Potential code and paper discrepancy:\", \"The paper describes an iterative feedback mechanism that optimizes both reward models and the base model. However, examination of `feedback_train.py` reveals that only unet parameters (base model) are passed to the optimizer. This suggests that only the base model is being optimized, while reward models remain static. This difference requires clarification.\", \"Question regarding test-time adaptation:\", \"Could the iterative feedback mechanism be applied as test-time adaptation of the base model? Similar to Slot-TTA [1], the base model could be optimized using reward models to improve compositional quality. The process would work as follows: for a given prompt, the base model generates an image, which is then evaluated by reward models. The base model's parameters would be updated to maximize these rewards. This process could be repeated for several iterations. 
This approach would eliminate the need for training the base model, allowing it to adapt to any prompt at test time through multiple iterations. Comments on this possibility would be valuable.\", \"[1] Test-time Adaptation with Slot-Centric Models, Prabhudesai et al., ICML 2023\"], \"edit\": \"After the rebuttal and discussion with the authors, I raise my score to 8 from 6.\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a framework that aggregates composition-aware model preferences from multiple models and employs an iterative feedback learning approach to enhance T2I compositionality and general performance. The qualitative and quantitative results show their SOTA compositional generation capabilities compared to previous works.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents a novel framework combining preferences from multiple diffusion models to enhance compositional text-to-image generation and address the relationship understanding in diffusion models.\\n2. The qualitative/quantitative results show comparable improvements in compositionality.\\n3. A composition-aware dataset is collected which provides diverse preferences that inform the reward models. (will it be released in the future?)\", \"weaknesses\": \"1. There is limited discussion of the computation resources required to manage multiple reward models, which may affect the scalability in large-scale applications. Although the authors claim that their model has fast inference speed, the cost of model training and data collection is not clear. This makes it seem less likely than DPO to be widely used in practice.\\n2. The user study only demonstrates user preferences, lacking deeper analysis of attribute binding and object relationships, which are critical to model performance. 
16 samples is also too small to evaluate such a complex task.\", \"questions\": \"1. As mentioned in W.1, how long does the training loop take (including the iterative feedback learning)?\\n2. Could I use this method to improve a specific concept generation (e.g., a human-object interaction)? How much time does it take from collecting synthetic data to finalizing the model training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer bbqi (Part 1/2)\", \"comment\": \"*We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We are glad to see that our paper is well-structured and easy to follow, the theoretical details and proof are complete, and the experimental results are extensive and promising. Please see below for our responses to your comments.*\\n\\n**Q1: About the number of texts used for attribute binding, \\\"500 prompts\\\" in Line 185 is inconsistent with \\\"1500\\\" in Table 1. Which is correct?**\", \"a1\": \"We apologize for the confusion. Attribute binding consists of three aspects: color, shape, and texture. For each aspect, we collect 500 representative prompts, totaling 500*3=1500 prompts for attribute binding. We have revised the problematic expression and **added explanations in the updated paper in line 187**. We apologize again for any misunderstandings caused.\\n\\n**Q2: Although the experiments are detailed, some comparisons appear incomplete. The reinforcement learning from human feedback (RLHF) approach leverages outputs from advanced models like FLUX and SD3 for training, yet direct comparisons with these models are not provided. Including these comparisons would better highlight the method's effectiveness.**\", \"a2\": \"Thank you for your suggestion. 
We've added a comparison of IterComp with FLUX and SD3 regarding compositional aspects as follows:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| ---------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |\\n| SD3-medium | 0.7442 | 0.6053 | 0.7055 | 0.2419 | 0.3294 | 0.4548 |\\n| FLUX-dev | 0.7881 | **0.6529** | 0.7472 | 0.2606 | **0.3396** | 0.4731 |\\n| IterComp | **0.7982** | 0.6217 | **0.7683** | **0.3196** | 0.3371 | **0.4873** |\", \"the_comparison_of_itercomp_with_flux_and_sd3_regarding_generation_quality_and_speed_as_follows\": \"| Model | CLIP Score$\\\\uparrow$ | Aesthetic Score$\\\\uparrow$ | ImageReward$\\\\uparrow$ | Inference Time$\\\\downarrow$ |\\n| ---------- | -------------------- | ------------------------- | --------------------- | -------------------------- |\\n| SDXL | 0.322 | 5.531 | 0.780 | **5.63 s/Img** |\\n| SD3-medium | 0.332 | 5.874 | 0.942 | 13.44 s/Img |\\n| FLUX-dev | **0.339** | 5.901 | 1.082 | 23.02 s/Img |\\n| IterComp | 0.337 | **5.936** | **1.437** | **5.63 s/Img** |\\n\\nFrom the two tables above, it is clear that IterComp outperforms SD3 in both compositionality and generation quality, with nearly a **threefold increase in inference speed**. IterComp is on par with FLUX-dev but has a significant advantage in spatial relationships due to our method's iterative enhancement of spatial awareness through a dedicated reward model. 
**Despite FLUX-dev having 12 billion parameters, our model achieves comparable or superior generation performance with only 1/5 of the parameters and nearly four times faster inference speed.**\\n\\nIt is worth noting that the original IterComp in our paper is optimized based on SDXL, and our main focus is to illustrate how this new optimization framework improves existing diffusion models. Thus it is unfair to make a direct comparison with models like SD3 and FLUX. IterComp is a versatile text-to-image or diffusion alignment framework that can be adapted to any model, including FLUX. In the future, we plan to train IterComp based on FLUX, which is expected to demonstrate even more powerful generative capabilities.\"}", "{\"title\": \"Response to Reviewer 52Cf (Part 2/2)\", \"comment\": \"**Q2: It is necessary to test the results of FLUX-dev directly on T2I-CompBench to see how much improvement the method proposed in this paper has. I currently suspect that the improvement may not be very significant.**\", \"a2\": \"Thank you for your suggestion! 
We have provided the T2I-CompBench results for FLUX and SD3 from the model gallery as follows:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ | Inference Time$\\\\downarrow$ |\\n| -------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- | -------------------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 | **5.63 s/Img** |\\n| SD3-medium | 0.7442 | 0.6053 | 0.7055 | 0.2419 | 0.3294 | 0.4548 | 13.44 s/Img |\\n| FLUX-dev | 0.7881 | 0.6529 | 0.7472 | 0.2606 | 0.3396 | 0.4731 | 23.02 s/Img |\\n| IterComp(SDXL) | **0.7982** | 0.6217 | **0.7683** | **0.3196** | 0.3371 | **0.4873** | **5.63 s/Img** |\\n| IterComp(SD3) | **0.8532** | **0.6922** | **0.8493** | **0.4074** | **0.3482** | **0.5419** | 13.44 s/Img |\\n\\nAs shown in the table, IterComp demonstrates a significant advantage over SD3 and FLUX in terms of spatial relationships, and offers faster generation speeds. This is because SD3 and FLUX have limited spatial-awareness, whereas our approach iteratively enhances the model\\u2019s spatial-awareness through a reward function focused on spatial relationships. It is worth noting that the original IterComp in our paper is optimized based on SDXL, and our main focus is to illustrate how this new optimization framework improves existing diffusion models.\\nThus it is unfair to make a direct comparison with models like SD3 and FLUX. Besides, according to this table, our IterComp can significantly improve the performance of SD3 and outperform previous models (including FLUX) from all aspects of compositional generation, demonstrating the effectiveness of our IterComp framework.\"}", "{\"comment\": \"Thank you for your detailed explanation and comprehensive experiments. 
I think that IterComp is an excellent work that fully leverages the capabilities of existing models through iterative training, enabling the development of a better-performing model with shorter inference time. I truly appreciate the method you proposed and the solid experimental results, and have raised my score to 8.\\n\\nOnce again, thank you for your contributions to the research on generative models and for this insightful discussion.\"}", "{\"title\": \"Response to Reviewer ocHL (Part 4/4)\", \"comment\": \"**Q9: Could the iterative feedback mechanism be applied as test-time adaptation of the base model? Similar to Slot-TTA [1], the base model could be optimized using reward models to improve compositional quality. The process would work as follows: for a given prompt, the base model generates an image, which is then evaluated by reward models. The base model's parameters would be updated to maximize these rewards. This process could be repeated for several iterations. This approach would eliminate the need for training the base model, allowing it to adapt to any prompt at test time through multiple iterations. Comments on this possibility would be valuable.**\", \"a9\": \"Thank you for your insightful question. Test-time adaptation is indeed a promising area for advancing diffusion models. Regarding the application of our IterComp framework to this process:\\n\\n* Iterative Enhancement: In IterComp, images generated by the optimized base model are incorporated back into the reward dataset for ongoing refinement of the reward models. 
While this iterative process does introduce additional inference time, its potential to significantly enhance generative quality without the need for retraining the base model makes it a worthwhile endeavor in scenarios where inference time is less critical.\\n\\n* Limitations in Complex Scenarios: It's important to note that relying solely on reward-guided optimization may not suffice in more complex generative tasks, such as chip design, where robust initial models are crucial due to a lack of sufficient training data. In our evaluations within the text-to-image domain, powerful underlying models (e.g., SDXL) ensure a baseline level of quality. If the foundational model lacks basic generation strength, test-time adaptation alone may not yield substantial improvements.\\n\\nThese considerations highlight the potential and limitations of applying IterComp as a test-time adaptation tool, depending on the specific requirements and constraints of the application scenario.\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"Dear Reviewer bbqi,\\n\\nThank you for raising your score! We greatly appreciate your recognition of our work and the valuable feedback you provided. We will continue to optimize this method and strive to make more contributions to the field.\\n\\nWarm Regards,\\n\\nThe Authors\"}", "{\"title\": \"Review response\", \"comment\": \"Hi authors,\\n\\nThe rebuttal resolves my concerns. I have decided to increase my score to 6.\"}", "{\"title\": \"Response to Reviewer ocHL (Part 1/4)\", \"comment\": \"*We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We are glad to see that our method is novel and will lead to new research directions, we provide detailed code to foster community progress, and the experimental results are extensive and promising. 
Please see below for our responses to your comments.*\\n\\n**Q1: Table 5 should be in the main part instead of the Appendix, as it simply demonstrates that the proposed method outperforms previous methods.**\", \"a1\": \"Thank you for your suggestion! **In the updated paper**, we have included Table 5 in the main part of the paper.\\n\\n**Q2: The paper combines all reward models simultaneously, likely leading to improved compositional performance. However, reviewers would benefit from seeing the individual effect of each reward model.**\", \"a2\": \"It's a good suggestion to test the individual effect of each reward model. We used the same training strategy as IterComp but trained with only one reward model at a time. The final results are shown below:\\n\\n| Model| Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| --------------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SDXL| 0.6369| 0.5408| 0.5637| 0.2032| 0.3110 |0.4091|\\n| IterComp(Attribute)| 0.7742 | 0.6079 | 0.7572| 0.2181 | 0.3129| 0.4429|\\n| IterComp(Spatial)| 0.6521| 0.5477 | 0.5881| 0.3008 | 0.3201 |0.4534|\\n| IterComp(Non-spatial) | 0.6463| 0.5426 | 0.5724| 0.2265 | 0.3354| 0.4367 |\\n| IterComp(All)| **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371**| **0.4873** |\\n\\nAs shown in the table, training with a single reward model allows the final diffusion model to achieve significant improvements in the specific preference targeted by that reward. Since different preferences are not entirely mutually exclusive, a single reward can also lead to minor enhancements in other metrics. For example, the spatial-based reward model primarily strengthens spatial-awareness but also demonstrates some degree of non-spatial understanding. 
Take the scenario of \\u201ca person looking at the moon in the sky\\u201d as an example: the spatial-based reward model focuses on learning the spatial concept that \\u201cthe moon is above the person.\\u201d At the same time, due to variations in the capabilities of the models in the model gallery, it may also learn the non-spatial relationship of \\u201clooking.\\u201d Therefore, using only the spatial-based reward model for feedback learning can also improve the model's ability to understand non-spatial relationships. \\n\\nThis indicates that rewards are not entirely independent. For complex tasks such as compositional generation, leveraging multiple reward models in combination can result in more substantial overall improvements.\\n\\n**Q3: While SDXL is chosen as the base model, testing other models would help reviewers understand how reward models affect different base architectures.**\", \"a3\": \"Thank you for your valuable suggestions and feedback. We applied the same method to perform iterative optimization on both SD1.5 and SD2.1, and the final results are as follows:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| ---------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SD1.5 | 0.3821 | 0.3543 | 0.4196 | 0.1277 | 0.3072 | 0.3091 |\\n| Itercomp(SD 1.5) | **0.5223** | **0.3929** | **0.4890** | **0.1849** | **0.3117** | **0.3590** |\\n| SD2.1 | 0.5044 | 0.4315 | 0.4927 | 0.1365 | 0.3123 | 0.3438 |\\n| Itercomp(SD 2.1) | **0.6294** | **0.4933** | **0.6072** | **0.2399** | **0.3214** | **0.4177** |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |\\n| IterComp(SDXL) | **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371** | **0.4873** |\\n\\n\\nWe found that applying IterComp to optimize all three base diffusion models significantly improved 
compositional generation. Notably, for spatial relationships, our method uses composition-aware reward models to enhance spatial awareness and iteratively improve understanding of positional relationships and quantities. In contrast, base models struggle to develop strong spatial capabilities relying solely on text guidance. Due to time and computational resource constraints, we focused on SD1.5 and SD2.1. Moving forward, we plan to continue refining IterComp and apply it to more advanced models such as FLUX.\"}", "{\"title\": \"Response to Reviewer HHze (Part 2/2)\", \"comment\": \"**Q4: As mentioned in W.1, how long does the training loop take (including the iterative feedback learning)?**\", \"a4\": \"Here is an approximate listing of the training time for each stage (all our experiments were conducted on 4*NVIDIA A100-80G GPUs):\\n\\n| Phase | Training Time |\\n| ---------------------------------------- | ------------- |\\n| Train reward models (iteration1) | 31min |\\n| Train base diffusion models (iteration1) | 3h 37min |\\n| Train reward models (iteration2) | 14min |\\n| Train base diffusion models (iteration2) | 1h 31min |\\n| Train reward models (iteration3) | 17min |\\n| Train base diffusion models (iteration3) | 1h 31min |\\n| Total | ~7h 41min |\\n\\nTo prevent overfitting and to improve the self-correction of the reward model, from the second iteration, we only fine-tuned the reward model using image pairs that include newly generated samples. For example, if the newly generated image is $x_0$, the reward model is fine-tuned on datasets such as $(x_0, x_{SDXL}), (x_0, x_{FLUX})$, while excluding repeated training on pairs like $(x_{SDXL}, x_{FLUX})$. This approach effectively prevents overfitting during multiple iterations while enabling the reward model to better learn composition-aware model preferences. Moreover, from the second iteration, the optimization of base diffusion models reduces the number of epochs by half, from 2 to 1. 
As a result, with the continuous refinement of iterative feedback learning, the training time of the model decreases progressively. Ultimately, the total training time is approximately 7 hours and 41 minutes.\\n\\n**Q5: Could I use this method to improve a specific concept generation (e.g., a human-object interaction)? How much time does it take from collecting synthetic data to finalizing the model training?**\", \"a5\": \"Our IterComp is a versatile optimization framework that can be effectively tailored to enhance specific concept generations. For instance, in the context of human-object interactions, we could employ powerful MLLMs or domain-specific models to generate multiple rewards from various perspectives, including the accuracy of human joint positions, object translation and rotation, and human-object contact. Once these reward models are in place, we can leverage an IterComp-like pipeline to iteratively refine the base generation model. As demonstrated in our research, the optimization process typically converges within three iterations, making it highly efficient in practice. Moreover, the amount of synthetic data required for optimization is minimal, approximately 1/1000 of the data needed for training the base model. Consequently, the overall time required is significantly less than that for training the original generation model.\"}", "{\"title\": \"Response to Reviewer Vtxw (Part 1/2)\", \"comment\": \"*We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We are glad to see that the proposed method is novel, the theoretical details and proof are complete, and the experimental results are promising. Please see below for our responses to your comments.*\\n\\n**Q1: The qualitative comparison in Fig. 4 is confusing. There seems to be a marginal improvement in Line 3 and Line 4. 
The authors should make the difference between the qualitative examples clear to be recognized.**\", \"a1\": \"We apologize for any inconvenience caused. The examples in the paper were not deliberately selected, and to demonstrate the superiority of our method, we conducted a large number of additional experiments **in the appendix A.7 of the updated manuscript**. Furthermore, to better showcase the advantages of IterComp over other algorithms, we have added evaluations against FLUX and SD3 on T2I-CompBench:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ | Inference Time$\\\\downarrow$ |\\n| ---------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- | -------------------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 | **5.63 s/Img** |\\n| SD3-medium | 0.7442 | 0.6053 | 0.7055 | 0.2419 | 0.3294 | 0.4548 | 13.44 s/Img |\\n| FLUX-dev | 0.7881 | **0.6529** | 0.7472 | 0.2606 | **0.3396** | 0.4731 | 23.02 s/Img |\\n| IterComp | **0.7982** | 0.6217 | **0.7683** | **0.3196** | 0.3371 | **0.4873** | **5.63 s/Img** |\\n\\nAs shown in the table, IterComp demonstrates strong superiority in compositional generation, outperforming SD3. Despite FLUX\\u2019s massive 12B parameters, **IterComp achieves similar or even better performance with only 1/5 of the parameters and generates images nearly 4 times faster**, with some metrics comparable to FLUX, while the majority demonstrate stronger results. IterComp is a highly generalizable text-to-image or diffusion alignment framework, and in the future, we plan to adapt IterComp to FLUX, which will further showcase even greater generative capabilities.\\n\\n**Q2: The qualitative examples are not enough to evaluate the model performance since the cases in the paper are somehow complex. 
The authors can provide more examples for a single case.**\", \"a2\": \"We thank the reviewer for helping us improve our paper. We have added more experiments **in the appendix A.7 of the updated manuscript**, including comparisons with additional simple examples. Please refer to the updated manuscript for details.\\n\\n**Q3: There is a need to evaluate the stability of the proposed model.**\", \"a3\": \"Thank you for your suggestions! We conducted experiments on the method's stability, **detailed in the appendix A.3 of the updated manuscript**. We selected five methods for comparison: SD1.5, SDXL, InstanceDiffusion, Diffusion-DPO, and FLUX, along with two evaluation metrics: *complex* and *CLIP-score*. Using the same 50 seeds, we calculated the mean and variance of the models\\u2019 performance for these metrics. To facilitate visualization, we used the variance of each method as the radius and scaled it uniformly by a common factor (10^4) for stability analysis.\\nRegarding the stability of compositionality, **as shown in Figure 8(a)**, we found that IterComp not only achieved the best overall performance but also demonstrated superior stability. This can be attributed to the iterative feedback learning paradigm, which enables the model to **analyze and refine its output at each optimization step, effectively self-correcting and self-improving**. The iterative training approach enables the model to perform feedback training based on its own generated samples rather than solely relying on external data; this enables the model to steadily improve over multiple iterations based on its own foundation, which significantly enhances model stability.\\n\\nFor the stability of realism or generation quality, **as shown in Figure 8(b)**, our method also exhibited the highest stability. 
Therefore, the iterative training approach not only improves the model's performance but also substantially enhances its stability across different dimensions.\"}", "{\"title\": \"Response to Reviewer Vtxw (Part 2/2)\", \"comment\": \"**Q4: The comparison with InstanceDiffusion is confusing. As a layout-guided method, InstanceDiffusion needs detailed layout inputs. It is not fair to provide only one box if the case includes two or more instances, as indicated in the third line of Fig. 4. As the authors attempt to compare with layout-to-image methods, a SOTA method named MIGC [1] is also not included.**\", \"a4\": \"We are grateful for your feedback and apologize for any potential confusion caused. For the layout-to-image method, we used GPT-4o to infer the corresponding layout for each example in the paper. Since layout-based models are not the focus of our work, we chose not to include these layouts in the visualizations. However, it's important to note that **all examples of layout-based models were generated with precise layouts**, as mentioned in line 408 of the original paper. We apologize for any misunderstandings and have **clarified this in the updated manuscript**.\\n\\nAdditionally, in the **appendix A.6 of the updated manuscript**, we provide additional comparisons between IterComp, InstanceDiffusion, and MIGC. These examples clearly show that while MIGC and InstanceDiffusion can accurately generate objects in the specified positions of the layout, there is a notable gap in generation quality compared to IterComp, in aspects such as aesthetics and details. Moreover, the images generated by these two methods often appear visually unrealistic, with significant flaws such as incomplete violins or mismatches between a bicycle and its basket. This highlights the clear superiority of our method. 
Additionally, we have included an evaluation of MIGC on T2I-CompBench:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ | Inference Time$\\\\downarrow$ |\\n| ----------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- | -------------------------- |\\n| InstanceDiffusion | 0.5433 | 0.4472 | 0.5293 | 0.2791 | 0.2947 | 0.3602 | 9.88 s/Img |\\n| MIGC | 0.5914 | 0.4503 | 0.5162 | 0.2811 | 0.2908 | 0.3729 | 11.47 s/Img |\\n| IterComp | **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371** | **0.4873** | **5.63 s/Img** |\\n\\nAs shown in the table, IterComp outperforms across all metrics and achieves nearly double the generation speed. Thank you once again for your valuable suggestions!\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"Dear Reviewer ocHL,\\n\\nMany thanks for raising your score! We sincerely appreciate your valuable comments and your precious time in reviewing our paper!\\n\\nWarm Regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes IterComp, an iterative composition-aware reward-controlled framework. It introduces a model gallery and constructs a high-quality composition-aware model preference dataset. Utilizing a new iterative feedback learning framework, IterComp progressively enhances both the reward models and the base diffusion model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work claims to be the first to introduce a reward-controlled framework in concept composition generation, which is somewhat novel in this field.\\n2. This work is presented well with complete theoretical details and proof.\\n3. The quantitative experimental results show better performance compared to SOTAs.\", \"weaknesses\": \"1. The qualitative comparison in Fig. 4 is confusing. 
There seems to be a marginal improvement in Line 3 and Line 4. The authors should make the difference between the qualitative examples clear to be recognized.\\n2. The qualitative examples are not enough to evaluate the model performance since the cases in the paper are somehow complex. The authors can provide more examples for a single case. Also, there is a need to evaluate the stability of the proposed model.\\n3. The comparison with InstanceDiffusion is confusing. As a layout-guided method, InstanceDiffusion needs detailed layout inputs. It is not fair to provide only one box if the case includes two or more instances, as indicated in the third line of Fig. 4. As the authors attempt to compare with layout-to-image methods, a SOTA method named MIGC [1] is also not included.\\n\\n[1] Zhou, Dewei, et al. \\\"Migc: Multi-instance generation controller for text-to-image synthesis.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer HHze (Part 1/2)\", \"comment\": \"*We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We are glad to see that the proposed method is novel, the experimental results are promising, and we contribute a high-quality composition-aware dataset. Please see below for our responses to your comments.*\\n\\n**Q1: Will the composition-aware dataset be released in the future?**\", \"a1\": \"We will fully open-source the composition-aware model preference dataset and the three pretrained reward models upon acceptance of the paper.\\n\\n**Q2: There is limited discussion of the computation resources required to manage multiple reward models, which may affect the scalability in large-scale applications. 
Although the authors claim that their model has fast inference speed, the cost of model training and data collection is not clear. This makes me feel it is less likely than DPO to be widely used in practice.**\", \"a2\": \"This is an excellent question. The focus of our method lies in the concepts of the **model gallery** and the **iterative feedback learning** paradigm, both of which are highly generalizable. IterComp can be applied to a wide range of tasks, with the number and design of reward models remaining flexible and customizable, as they are not the core focus of our approach. Given the inherent challenges of compositional generation, we employed three reward functions to enhance model performance, but this component is adjustable.\\n\\nOur method can be tailored to tackle complex tasks, whereas DPO is limited to handling simpler ones. Without explicit reward models, DPO is unable to enable the base model to effectively learn from multiple preferences. Furthermore, due to the lack of reward models in DPO, it is not possible to automatically rank the generated images from the optimized base model or the newly added models, making it incompatible with the iterative feedback learning paradigm. 
We applied DPO to SDXL using the same dataset as IterComp (i.e., 52,500 image pairs), and the results on T2I-CompBench are as follows:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ |\\n| ------------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |\\n| SDXL | 0.6369 | 0.5408 | 0.5637 | 0.2032 | 0.3110 | 0.4091 |\\n| Diffusion-DPO | 0.6822 | 0.5691 | 0.6224 | 0.2216 | 0.3291 | 0.4263 |\\n| IterComp | **0.7982** | **0.6217** | **0.7683** | **0.3196** | **0.3371** | **0.4873** |\\n\\nAs shown in the table, Diffusion-DPO (i.e., the model optimized using DPO on the same training set as IterComp) shows a significant gap compared to IterComp, with limited improvements in aspects such as shape and spatial relationships. This is because DPO's training method is more suited for simpler generation tasks, and the lack of an explicit reward model restricts the model's ability to learn composition-aware model preferences. Additionally, it cannot improve the model's capabilities through iterative training.\\n\\n**Q3: The user study only demonstrates user preferences, lacking deep analysis of attribute binding and object relationships, which are critical to model performance. 16 samples is also too small to evaluate such a complex task.**\", \"a3\": \"Thank you for your suggestion. We conducted a larger and more comprehensive user study, **with the results included in the appendix A.4 of the updated paper**. The study involved 41 randomly selected participants from diverse backgrounds. We compared IterComp with five other methods across four aspects: attribute binding, spatial relationships, non-spatial relationships, and overall performance. Each comparison involved 25 prompts, culminating in a final survey of 125 prompts and generating 20,500 votes. 
From the win rate distribution of IterComp shown in the figure, it is evident that IterComp demonstrates significant advantages across all three aspects of compositional generation.\\n\\nSpecifically, compared to the layout-based model InstanceDiffusion, IterComp shows an absolute advantage in attribute binding. For text-based models such as SDXL and FLUX, IterComp leads significantly in spatial relationships. This highlights that the model gallery design effectively collects composition-aware model preferences and enhances performance across different compositional aspects through iterative feedback learning.\"}", "{\"title\": \"Response to Reviewer 52Cf (Part 1/2)\", \"comment\": \"*We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We are glad to see that the proposed method is novel and practical, the empirical results are strong, the theoretical analysis is detailed, and our method is well-designed and easy to follow. Please see below for our responses to your comments.*\\n\\n**Q1: The paper mentioned that it is challenging for RPG to achieve precise generation, but Tab2 did not compare with RPG, and I checked that RPG's performance on T2I-Compbench is better than that of the paper.**\", \"a1\": \"Thank you for your comment! IterComp is a general framework for text-to-image generation or diffusion alignment, rather than an improvement for a specific model. It is worth noting that RPG uses an MLLM to handle generation tasks with complex prompts.\\nSuch LLM-enhanced diffusion models naturally perform better than pure diffusion models (e.g., SDXL and our IterComp) on complex generation. 
To make fair comparison, we replace the diffusion backbone of RPG with our IterComp and provide its metrics on T2I-CompBench:\\n\\n| Model | Color$\\\\uparrow$ | Shape$\\\\uparrow$ | Texture$\\\\uparrow$ | Spatial$\\\\uparrow$ | Non-Spatial$\\\\uparrow$ | Complex$\\\\uparrow$ | Inference Time$\\\\downarrow$ |\\n| ------------ | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- | -------------------------- |\\n| IterComp | 0.7982 | 0.6217 | 0.7683 | 0.3196 | 0.3371 | 0.4873 | **5.63 s/Img** |\\n| RPG | 0.8335 | 0.6801 | 0.8129 | 0.4547 | 0.3462 | 0.5408 | 15.57 s/Img |\\n| RPG+IterComp | **0.8668** | **0.7216** | **0.8201** | **0.4874** | **0.3498** | **0.5661** | 15.57 s/Img |\\n\\nThe table shows that when the RPG backbone is replaced with IterComp, the model significantly outperforms across all six metrics on the T2I-CompBench. This highlights IterComp's superiority in compositional generation. It's important to note that IterComp is a simple SDXL-like model that doesn't require complex computations during inference. As a result, under the same conditions such as prompts and inference steps, IterComp is nearly **three times faster than RPG**.\\n\\nIn addition, to more comprehensively evaluate the capabilities of IterComp and RPG in compositional generation, we employed two additional, up-to-date benchmarks for testing:\", \"dpg_bench\": \"| Model | Global | Entity | Attribute | Relation | Other | Average |\\n| ------------ | --------- | --------- | --------- | --------- | --------- | --------- |\\n| IterComp | 89.91 | 88.64 | 86.73 | 84.77 | 89.74 | 81.17 |\\n| RPG | 91.01 | 87.39 | 84.53 | 87.92 | 89.84 | 81.28 |\\n| RPG+IterComp | **92.74** | **91.33** | **89.10** | **92.38** | **90.13** | **84.72** |\", \"geneval\": \"| Model | Single Obj. | Two Obj. | Counting | Colors | Position | Color Attri. 
| Overall |\\n| ------------ | ----------- | -------- | -------- | -------- | -------- | ------------ | --------- |\\n| IterComp | 0.97 | 0.85 | 0.63 | 0.86 | 0.33 | 0.41 | 0.675 |\\n| RPG | 0.97 | 0.86 | 0.66 | 0.79 | 0.30 | 0.38 | 0.660 |\\n| RPG+IterComp | **0.99** | **0.90** | **0.72** | **0.90** | **0.35** | **0.48** | **0.723** |\\n\\nAs demonstrated in the two benchmarks above, IterComp outperforms RPG in metrics like attributes and colors. This is due to our training of a specific reward model for attribute binding, which iteratively enhances IterComp over multiple iterations. Leveraging the strong planning and reasoning capabilities of LLMs, RPG excels in areas such as relations, counting, and positioning. When IterComp is used as the backbone for RPG, the model exhibits remarkable performance across all aspects. We have **included this experiment in the appendix A.5 of the updated manuscript**. Thank you again for your feedback.\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"Dear Reviewer 52Cf\\uff0c\\n\\nThank you for raising score! We sincerely appreciate your valuable comments and your precious time in reviewing our paper!\\n\\nWarm Regards,\\n\\nThe Authors\"}", "{\"title\": \"Global Response\", \"comment\": \"We sincerely thank all the reviewers for their thorough reviews and valuable feedback. We are glad to hear that our proposed framework is novel and practical (reviewer Vtxw, 52Cf, HHze and ocHL), the theoretical details and proof is clear and complete (reviewer Vtxw, 52Cf and bbqi), and the performance improvements demonstrated in experiments are significant and promising (all reviewers).\\n\\nWe summarize our responses to the reviewers' comments as follows:\\n\\n- We additionally provide more examples and conduct more experiments to show the significant improvement of our IterComp and **updated our manuscript in Appendix. A.5, A.6, and A.7**. 
(Reviewer Vtxw, 52Cf, bbqi and ocHL)\\n- We additionally conduct experiments on model stability and **updated our manuscript in Appendix A.3** (Reviewer Vtxw and bbqi)\\n- We additionally conduct an analysis of the training time and application of IterComp (Reviewer HHze, bbqi and ocHL)\\n\\nWe reply to each reviewer's questions in detail below their reviews. Please kindly check them out. Thank you and please feel free to ask any further questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces IterComp, a novel framework that enhances compositional text-to-image generation by aggregating preferences from multiple diffusion models through iterative feedback learning. The approach demonstrates superior performance in both compositional accuracy and image quality, while maintaining efficient inference speed. The main strength lies in its ability to combine different models' advantages without adding computational overhead, though the long-term stability of the iterative learning process could be further explored.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Novel and practical approach: The paper presents a simple yet effective way to combine multiple models' strengths for compositional generation without increasing computational complexity.\", \"strong_empirical_results\": \"The method shows clear improvements over existing approaches, with comprehensive evaluations on both compositional accuracy and image quality metrics.\", \"well_structured_technical_contribution\": \"The paper provides clear theoretical analysis with detailed proofs, and the iterative feedback learning framework is well-designed and easy to implement.\", \"weaknesses\": \"The paper mentioned that RPG is challenging to achieve precise generation, but Tab2 did not compare with RPG, and I checked that RPG's performance on T2I-Compbench is better than that of the paper.\\n\\nIt is
necessary to test the results of FLUX-dev directly on the t2i compbench to see how much improvement the method proposed in this paper has. I currently suspect that the improvement may not be very significant.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4vzGQcVUG8
Provable weak-to-strong generalization via benign overfitting
[ "David Xing Wu", "Anant Sahai" ]
The classic teacher-student model in machine learning posits that a strong teacher supervises a weak student to improve the student's capabilities. We instead consider the inverted situation, where a weak teacher supervises a strong student with imperfect pseudolabels. This paradigm was recently brought forth by \citet{burns2023weak} and termed \emph{weak-to-strong generalization}. We theoretically investigate weak-to-strong generalization for binary and multilabel classification in a stylized overparameterized spiked covariance model with Gaussian covariates where the weak teacher's pseudolabels are asymptotically like random guessing. Under these assumptions, we provably identify two asymptotic phases of the strong student's generalization after weak supervision: (1) successful generalization and (2) random guessing. Our techniques should eventually extend to weak-to-strong multiclass classification. Towards doing so, we prove a tight lower tail inequality for the maximum of correlated Gaussians, which may be of independent interest. Understanding the multilabel setting reinforces the value of using logits for weak supervision when they are available.
[ "benign overfitting", "spiked covariance models", "overparameterized models", "interpolation", "pseudolabeling", "weak-to-strong generalization", "alignment" ]
Accept (Poster)
https://openreview.net/pdf?id=4vzGQcVUG8
https://openreview.net/forum?id=4vzGQcVUG8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yMmRjPYOxZ", "rnF1hxantF", "kCDVYKXpjH", "iqIoEIcZq4", "gP7sZzf17h", "aQqkyAi7AH", "YtFuGUpDV3", "XIi4Yj7QSx", "SdrMoGrHMh", "NwfGPc5ftH", "C6gIEe28pC", "AvF27rZgv3", "6nMME01sow" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision" ], "note_created": [ 1730698846541, 1731970624711, 1730526598576, 1731970843394, 1731970931929, 1735096418562, 1733164995011, 1730410797547, 1732145621759, 1732225589464, 1731970759773, 1730607578228, 1737523904628 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_Ns5A" ], [ "ICLR.cc/2025/Conference/Submission8378/Authors" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_P4ZD" ], [ "ICLR.cc/2025/Conference/Submission8378/Authors" ], [ "ICLR.cc/2025/Conference/Submission8378/Authors" ], [ "ICLR.cc/2025/Conference/Submission8378/Area_Chair_d7uG" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_vRgg" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_ww6B" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_ww6B" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_P4ZD" ], [ "ICLR.cc/2025/Conference/Submission8378/Authors" ], [ "ICLR.cc/2025/Conference/Submission8378/Reviewer_vRgg" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper investigates weak-to-strong generalization in the setting of an overparameterized spiked covariance model with Gaussian covariates. 
The paper identifies an asymptotic phase transition between successful and unsuccessful generalization.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The math appears correct to me; the problem is significant, and desiderata 1 and desiderata 2 make sense.\", \"weaknesses\": \"The paper is rather technical, and the clarity could be improved significantly to make it more readable. (see questions)\", \"questions\": \"1. The main setup is quite confusing to me. The paper first states that \\\"$f_{weak} \\\\in \\\\mathbb{R}^d$\\\" is the object we learn. Normally, the model is a function, not a vector, so this was not immediately clear. It is defined later in line 347 how we learn $ f $, which is quite far from where it was introduced (line 184). It would be better to define that we train $f$ by MNI earlier.\\n\\n2. In line 201, it says, \\\"As a consequence of our main results in Section 3, we will show that the above desiderata are achievable in a simple toy model; see Theorem 3.3 for a formal statement.\\\" However, Theorem 3.3 only considers desiderata 1.2 and 2.1, not the entirety of the desiderata.\\n\\n3. What is \\\"$t$\\\" in Equation (3) of Theorem 3.1?\\n\\n4. The notation $ u, p, q, r $ used is not very intuitive, and it makes the result difficult to interpret. Is there a simpler way to rephrase the result?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their feedback. Below, we address their comments.\\n> The main setup is quite confusing to me. The paper first states that \\\"$f_{\\\\mathsf{weak}} \\\\in \\\\mathbb{R}^d$\\\" is the object we learn. Normally, the model is a function, not a vector, so this was not immediately clear. It is defined later in line 347 how we learn $f$, which is quite far from where it was introduced (line 184). 
It would be better to define that we train $f$ by MNI earlier.\\n- We apologize for the confusion with the notation. We have uploaded an updated version where we clarify this, including clarifying how $f$ is trained by MNI earlier. Our goal with the discussion around Line 184 was to encapsulate many different w2s training schemes, but we agree it is helpful to keep a concrete algorithm in mind. \\n\\n> In line 201, it says, \\\"As a consequence of our main results in Section 3, we will show that the above desiderata are achievable in a simple toy model; see Theorem 3.3 for a formal statement.\\\" However, Theorem 3.3 only considers desiderata 1.2 and 2.1, not the entirety of the desiderata.\\n- The two equations in Lines 421-422 contain the conditions for the additional Desiderata being satisfied. In the revision, we have stated more explicitly that Desiderata 1.i-1.iii are all satisfied, and changed the tags to make it more intuitive. In addition, Remark 3.4 discusses Desiderata 2.i and 2.ii. We felt it would be distracting to focus on the bonus desiderata too much, so we moved it to the Remark. We updated the writing near Line 201 to reference Remark 3.4 regarding the bonus desiderata. \\n\\n> What is $t$ in Equation (3) of Theorem 3.1? \\n- $t \\\\in [0, s)$ controls the number of label classes in the multiclass problem: $k = n^t$, following Definition 2. We have updated the wording of the theorem to make this more explicit.\\n\\n> The notation $u,p,q,r$ used is not very intuitive, and it makes the result difficult to interpret. Is there a simpler way to rephrase the result?\\n- We apologize for the confusion. The reason we chose this type of notation is that prior work in this area has used similar conventions (see, e.g., [1-4]). We thought it might be easier to work in log space (where conditions are additive) as opposed to multiplicative, but we can try to add a couple sentences to rephrase things. \\n\\n[1] Wang, K., & Thrampoulidis, C. (2022). 
Binary classification of gaussian mixtures: Abundance of support vectors, benign overfitting, and regularization. SIAM Journal on Mathematics of Data Science, 4(1), 260-284.\\n\\n[2] Wang, K., Muthukumar, V., & Thrampoulidis, C. (2021). Benign overfitting in multiclass classification: All roads lead to interpolation. Advances in Neural Information Processing Systems, 34, 24164-24179.\\n\\n[3] Muthukumar, V., Narang, A., Subramanian, V., Belkin, M., Hsu, D., & Sahai, A. (2021). Classification vs regression in overparameterized regimes: Does the loss function matter?. Journal of Machine Learning Research, 22(222), 1-69.\\n\\n[4] Wu, D., & Sahai, A. (2024). Precise asymptotic generalization for multiclass classification with overparameterized linear models. Advances in Neural Information Processing Systems, 36.\"}", "{\"summary\": \"In this work, the authors provide theretical justification for the empirically observed phenomenon of weak to strong generalization. In this setting, a weak learner is used to created labelled examples (from unlabelled training data) that is used to further train a stronger model. The intuition is that the weak learner has learnt some useful information about the ground truth and hence the pseudolabels it generates will actually enable generalization. The authors prove that this weak to strong generalization has two phases: (1) when the number of pseudolabelled examples is less than some threshold, the strong learner behaves like a random guesser, (2) beyond the threshold the strong learner achieves perfect generalization. 
A technically interesting tool that they use is a new lower tail for the max of correlated gaussians which could be of independent interest.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) This work addresses the important problem of obtaining theoretical justification for a frequently encountered empirical phenomenon\\n2) The lower tail for max of correlated gaussians is an interesting result.\", \"weaknesses\": \"See questions.\", \"questions\": \"1) What does the word \\\"represent\\\" mean in Desiderata 1.(ii)?\\n2) What is the significance of the bi-level-ensemble? \\n3) What is $t$ in Theorem 3.1?\\n4) Is there a reason for choosing a halfspace for the ground truth? Does this analysis extend to other concepts? Is there a similar notion for regression (rather than classification)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their positive review. Below, we address their questions.\\n\\n> What does the word \\\"represent\\\" mean in Desiderata 1.(ii)?\\n- Here we meant that the strong model can perfectly simulate the weak model. In other words, the strong model\\u2019s capabilities are a superset of those of the weak model. For the linear setting, this just means that the weak features are in the span of the strong features.\\n\\n> What is the significance of the bi-level-ensemble?\\n- The bi-level ensemble is a minimal instantiation of a spiked covariance model that fully specifies all the aspects of the covariance of the data, i.e. a toy model for low-rank structure. You can also think of it as a parameterized linear manifold hypothesis.
The bi-level ensemble has been studied in previous works as a way to state cleaner theoretical results for benign overfitting [1-4].\\n\\n> What is $t$ in Theorem 3.1?\\n- As mentioned in the response to Reviewer Ns5A, $t \\\\in [0, s)$ controls the number of label classes in the multiclass problem: $k = n^t$, following Definition 2. We have updated the wording of the theorem to make this more explicit.\\n\\n> Is there a reason for choosing a halfspace for the ground truth? Does this analysis extend to other concepts. Is there a similar notion for regression (rather than classification)?\\n- (Shifted) halfspaces are fundamental in the study of classification. For example, even complicated neural network architectures for multiclass / binary classification are predicated on halfspaces, since the last layer is usually a linear head. \\nThe analysis would not directly work beyond the linear setting, as we use the explicit analytic form of the minimum $\\\\ell_2$-norm interpolator. \\n- For regression, the same linear model that gives a halfspace can also be used as a regression model, if one doesn\\u2019t apply softmax. Our analysis should go through in this case: previous work [3] already developed the tools to tightly characterize the regression setting.\\n\\n[1] Wang, K., & Thrampoulidis, C. (2022). Binary classification of gaussian mixtures: Abundance of support vectors, benign overfitting, and regularization. SIAM Journal on Mathematics of Data Science, 4(1), 260-284.\\n\\n[2] Wang, K., Muthukumar, V., & Thrampoulidis, C. (2021). Benign overfitting in multiclass classification: All roads lead to interpolation. Advances in Neural Information Processing Systems, 34, 24164-24179.\\n\\n[3] Muthukumar, V., Narang, A., Subramanian, V., Belkin, M., Hsu, D., & Sahai, A. (2021). Classification vs regression in overparameterized regimes: Does the loss function matter?. Journal of Machine Learning Research, 22(222), 1-69.\\n\\n[4] Wu, D., & Sahai, A. (2024). 
Precise asymptotic generalization for multiclass classification with overparameterized linear models. Advances in Neural Information Processing Systems, 36.\"}", "{\"title\": \"Rebuttal\", \"comment\": [\"We thank the reviewer for their feedback. We address their questions below.\", \"> Ideally, Theorem 3.3 should be standalone and at the very least, the variables in Theorem 3.3 like $\\\\tau_{\\\\mathsf{weak}}$, $p_{\\\\mathsf{weak}}$ should be defined.\", \"We agree that it is better to have theorem statements be more self-contained. Stating the theorem precisely requires using a decent amount of notation, and to reduce redundancy we have set up the data assumptions and notations explicitly in Theorem 3.2. For example, we have explicitly defined $\\\\tau_{\\\\mathsf{weak}}$ in line 419.\", \"In the revision, we have amended the tags and wording in Theorem 3.3, especially around the two conditions in lines 421-423. It should hopefully be more intuitive what each condition is referring to. Also, we have reinforced the definitions of $\\\\tau_{\\\\mathsf{strong}}$ and $\\\\tau_{\\\\mathsf{weak}}$.\", \"> The notation and current presentation of the result doesn't really make it seem like this is a \\\"simple, toy model\\\", given how many free variables there are to keep track of. One possible fix is to give more intuition and less notation about the toy-model in the main text, and push the details into the Appendix.\", \"We agree that there are many parameters floating around. However, we also believe that this bi-level covariance is essentially the simplest parameterization of low-rank data that can be tractably studied from a theoretical perspective.\", \"In particular, these parameters can be interpreted as follows: $p$ specifies the ambient dimension $d$, $r$ specifies the dimension $s$ of the low-rank subspace, $q$ specifies the strength $a$ of the spike. 
A full specification of a low-rank data distribution needs to specify these three parameters.\", \"We will try to include some more intuition about the main result in the revision, but we do think it is still important to have a fully precise statement in the main text. That way, a reader can know what is the exact statement being claimed without chance of ambiguity.\", \"> For example, I think it would be really helpful to have an informal, non-rigorous theorem summarizing the main result in the Main Contributions section.\", \"In the main contributions, we have emphasized the informal statement of the main result.\", \"> why one should care about finding a \\\"simple, concrete theoretical setting where we can provably exhibit different phases of weak-to-strong generalization?\\\"\", \"Weak-to-strong generalization is an interesting empirical phenomenon in ML models which might be important for scaling up post-training for foundation models. Human-labeled data is expensive to procure, but there is also no guarantee that using weakly-labeled data would scale well. Our theoretical results give provable guarantees for these kinds of training schemes in a toy setting. Moreover, because the setting is simple and theoretical, it is easier to trace down and explain exactly why the training strategy works.\", \"> what can I take away from this result?\", \"One intuitive takeaway is that weak-to-strong generalization occurs based on how the strong model implicitly \\u201cknows\\u201d that certain latent directions in the weak model\\u2019s activations are more important than others. The weak model makes errors because its representations are low quality, but the key lies in the way its representations relate to the strong model's representations. 
Presumably, the reason the strong model's representations are better has to do with the way that it was pretrained as well as its own architecture.\", \"Another takeaway regards our insights about multilabel vs multiclass training. In particular, we show that using multilabel weak supervision can do better than multiclass clean supervision! This suggests that in order to get effective weak supervision, whenever possible, one should use the logits / soft labels from the weak model to supervise the strong model. Because we can establish this rigorously in the classification-style setting here, it suggests that we should look for how to do a counterpart of this in the LLM setting (or more generically for generative models). Having access to a weak model for providing supervision opens up more possibilities than present with human-labeled data.\"]}", "{\"metareview\": \"Motivated by recent work by Burns et al., this paper identifies a stylized setting where one can show weak to strong generalization theoretically. The weak model is trained on 'weak features' and this model is used to generate pseudolabels for more (unlabeled data). The strong model is trained with 'strong features' on pseudolabels. The authors show that in this case: (i) the weak model has not much better than random accuracy, and (ii) the strong model has almost optimal accuracy. Furthermore an additional condition that the strong model can fully represent the required model is satisfied. The paper focuses on multi-class classification with min-l2-norm interpolation. While it is nice that there are theoretical results, the reviewers have questioned whether the model is realistic enough to actually capture the phenomena observed in prior empirical work. There are also lots of assumptions that are not always easy to interpret.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers engaged well with the authors.\"}", "{\"comment\": \"Thank you for addressing the concerns. 
I would like to keep the current score given the limited utility of the developed theory.\"}", "{\"summary\": \"This paper studies a toy-model for weak-to-strong generalization. They show that under the assumptions of their toy-model, two asymptotic phases occur for the student: (1) it is able to successfully generalize or (2) the student resorts to effectively random guessing. The authors also try to extend their results to weak-to-strong multiclass classification and derive new lower tail inequalities for the max of correlated gaussians.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper studies a phenomenon that has been empirically observed and thus relevant to practice\", \"The results and proof techniques seem non-trivial and interesting\"], \"weaknesses\": \"I find the organization and presentation a bit confusing and hard to parse. In particular, Theorem 3.3 is hard to interpret without referencing the Desiderata outlined in Section 2. Ideally, Theorem 3.3 should be standalone and at the very least, the variables in Theorem 3.3 like $\\\\tau_{weak}, p_{weak}, ...$ should be defined. In addition, and in my opinion, the notation and current presentation of the result doesn't really make it seem like this is a \\\"simple, toy model\\\", given how many free variables there are to keep track of. One possible fix is to give more intuition and less notation about the toy-model in the main text, and push the details into the Appendix. For example, I think it would be really helpful to have an informal, non-rigorous theorem summarizing the main result in the Main Contributions section.\\n\\nIn addition, I am not sure what to take away from this paper. It is nice that you found a toy example, where you can provide rigorous evidence of the empirical phenomena of weak-to-strong generalization.
However, I am not convinced this toy model is realistic/relevant to practice, even after reading the Modeling assumptions in the Discussion. In short, it would be nice if the authors can answer:\\n- **why** one should care about finding a \\\"simple, concrete theoretical setting where we can provably exhibit different phases of weak-to-strong generalization?\\\" \\n- what can I take away from this result?\", \"questions\": \"See weaknesses above. It would be nice if the authors can address these.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their response. I have increased my score to a 6.\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": \"I thank the authors for their response. I have left my score unchanged\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their comments and address their feedback below.\\n> Most of the important details are pushed into appendix. The main body only contains one useful theorem which identifies a certain condition where condition 1) of weak to strong generalization holds. Setting for condition 2) and multi class settings are merely mentioned as claims. \\n- Condition 2) is essentially Desiderata 2.ii, which is not a core part of the weak-to-strong generalization phenomenon. We mention this merely as a claim because, it essentially just reduces to checking the conditions from Theorem 3.1, i.e. that $\\\\tau_{\\\\mathsf{strong}} < 0$. We have updated Remark 3.4 to make this more explicit. \\n- Regarding the multilabel/multiclass setting (Theorem 3.5), as stated in the text, it requires some additional setup, so due to space constraints we could not elaborate much further on it formally. 
We will add some comments to convey the intuition of the proof and how it reduces to the binary classification analysis.\\n\\n> The main body also does not provide proof sketch or provide insights into the proof of the theorem.\\n- In the revision, we have moved the proof sketch back into the main text. The brief idea is that we can analyze the SNR of the true direction $v_*$ when the model is trained on labels generated by another direction $w$. We can use this high level strategy to analyze the generalization of the weak model, as well as the generalization of the weak-to-strong model. \\n\\n> Reduce the introduction - it currently spans 2 pages.\\n- We have condensed the introduction, while still attempting to be fair and thorough with our literature review. \\n\\n> Figure 1 is useless\\n- We have removed Figure 1. \\n\\n> The section on data model was not particularly needed. Page 5 and 6 can be compressed into 1 or 2 paragraphs.\\n- We believe that since the weak-to-strong generalization setup is slightly nonstandard, it can be a little confusing if readers are unfamiliar with the concept. Also, since the result crucially relies on the data assumptions, it was worth being more thorough with the assumptions (perhaps at the risk of being slightly verbose). We have attempted to slightly trim down on the more redundant parts, but think that compressing much more would hamper the paper\\u2019s readability.\\n\\n> Include some experiments in main body.\\n- We have moved some of the MNI experiments from the appendix back into the main text in Figure 2.\"}
Then weak to strong generalization implies that\\n\\nCondition 1): The strong model has perfect classification accuracy whereas the weak model has close to random accuracy. \\n\\nCondition 2): The generalization is due to weak labels, i.e. if the strong model was only trained on $n$ clean labels, there is no generalization.\", \"the_setting_is_as_follows\": \"A learner observes features distributed according to a Gaussian distribution, $x \\\\sim N(0, \\\\Lambda)$ where $\\\\Lambda$ is diagonal covariance matrix following a bilevel ensemble parameterization\\n\\\\begin{equation}\\\\lambda_j = \\\\lambda_F = \\\\frac{ad}{s} \\\\text{ for } 1 \\\\leq j \\\\leq s \\\\text{ otherwise } \\\\lambda_j = \\\\lambda_U = \\\\frac{(1-a)d}{d-s}\\\\end{equation}\\nwhere $d = n^p, s= n^r, a = n^{-q}$ and $p > 1; q, r >0; q+ r < p$. For multiclass setting, classes are further scaled as $k = c_k n^t$ for some $t<r$. The strong model observes features given by some $p, q, r$ and weak model observes features characterized through $p_{weak}, q_{weak}, r_{weak}$. In particular the strong features $x_{strong}$ and weak features $x_{weak}$ are given as \\n$$ x_{strong} = N(0, \\\\lambda_F I_{[s]} + \\\\Lambda_U I_{[d]/[s]}) $$\\n$$ x_{weak} = N(0, \\\\lambda_{F, weak} \\\\Pi_S + \\\\Lambda_{U, weak} \\\\Pi_T)$$\\nfor some subsets $S \\\\subseteq [s], T \\\\subseteq [d]/[s]$ and $\\\\Pi_S$ denotes projection onto axis aligned subspace indexed by $S$. $\\\\lambda_{F, weak} = \\\\frac{a_{weak}d_{weak}}{s_{weak}}$ and $\\\\Lambda_{U, weak} = \\\\frac{(1-a_{weak})d_{weak}}{d_{weak}-s_{weak}}$.\\n\\n The true labels are given by $y = \\\\text{sign}(x_1)$ for binary classification and $y = \\\\arg\\\\max_k (x_1, \\\\dots x_K)$ for $K$ way classification. 
\\n\\n\\n \\nIn this parameterized setting, the authors show that there is a particular regime of the number of weak labels $m$ provided by the weak model (for certain regimes of $p, q, r, p_{weak}, q_{weak}, r_{weak}$) where weak to strong generalization occurs (condition 1) holds). The conditions (for binary classification) are given by (assuming $m = n^u$)\\n\\n1. $u + \\\\min(1 -r, p + 1 - 2(q + r)) > q_{weak}+r_{weak} > (p_{weak} + 1)/ 2$\\n2. $p + 1 > (q + r + q_{weak} + r_{weak})$\\n3. $u < (p + 1 + q + r - (q_{weak} + r_{weak})/ 2)$ \\n\\nFurther, the classification error of the strong learner trained on $n$ clean labels is shown to scale as \\n$$1/2 - 1/\\\\pi \\\\arctan (\\\\Theta(n^{p+1 - 2(q+r)}))$$\\n\\nThus they claim one can identify regimes under which condition 2) also holds (possibly when $p+1 - 2(q+r) \\\\ll 1$), although no details are provided.\\n\\nFurther, they provide an informal claim (with details in the appendix) that a similar regime exists for the multiclass setting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Exact characterization of the regime where weak to strong generalization occurs in terms of parameters of the covariance matrix of strong and weak features.\", \"weaknesses\": \"Most of the important details are pushed into appendix. The main body only contains one useful theorem which identifies a certain condition where condition 1) of weak to strong generalization holds. Setting for condition 2) and multi class settings are merely mentioned as claims. The main body also does not provide proof sketch or provide insights into the proof of the theorem.\", \"questions\": \"Suggestions:\\n\\n1. Reduce the introduction - it currently spans 2 pages. \\n2. Figure 1 is useless.\\n3. The section on data model was not particularly needed. Page 5 and 6 can be compressed into 1 or 2 paragraphs.\\n4.
Include some experiments in the main body.\\n\\nIn general, the paper is quite verbose; it can be compressed substantially and content moved back into the main body.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
4vm6Nn2DW9
Exploring Temporal Semantic for Incomplete Clustering
[ "Zheng Xing", "Weibing Zhao" ]
Clustering data with incomplete features has garnered considerable scholarly attention; however, the specific challenge of clustering sequential data with missing attributes remains largely under-explored. Conventional heuristic methods generally address this issue by first imputing the missing features, thereby making the clustering results heavily reliant on the quality of imputation. In this paper, we introduce a novel clustering framework, termed ETC-IC, which directly clusters incomplete data with rigorous theoretical guarantees, whilst concurrently leveraging temporal semantic consistency to enhance clustering performance. Empirical evaluations demonstrate that the proposed model consistently surpasses current state-of-the-art methods in clustering human motion data.
[ "Temporal semantic", "incomplete clustering", "human motion segmentation" ]
Reject
https://openreview.net/pdf?id=4vm6Nn2DW9
https://openreview.net/forum?id=4vm6Nn2DW9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s7gpOBnyrQ", "PYHEtMPyZv", "K9l9MPZ8iD", "JyW4lZsGHX", "BSss8Ur30J", "0xGuiELnEC" ], "note_type": [ "official_review", "meta_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1730531842324, 1734327747270, 1737523812334, 1730451694198, 1730668959295, 1730560853576 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7038/Reviewer_KEMZ" ], [ "ICLR.cc/2025/Conference/Submission7038/Area_Chair_j4mr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7038/Reviewer_ivFX" ], [ "ICLR.cc/2025/Conference/Submission7038/Reviewer_m58G" ], [ "ICLR.cc/2025/Conference/Submission7038/Reviewer_42Da" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces ETC-IC framework, which possesses the capability to seamlessly integrate temporal information while concurrently addressing the challenge of missing data. Firstly, to manage the issue of missing entries, ETC-IC employs an algebraic subspace analysis and develop a theoretically grounded alternative, thereby ensuring accurate clustering even in the presence of incomplete data. Secondly, ETC-IC explores the temporal semantics inherent in sequential data by aligning data points and their temporal assignments through a temporal semantic consistency constraint, thereby ensuring that data points with similar temporal semantics are clustered together. The handling of missing data and the exploration of temporal semantics are unified within a single cohesive framework, thereby demonstrating the adaptability and versatility of the proposed method in addressing incomplete sequential data as required.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents a clustering framework distinguished by its remarkable adaptability in addressing the inherent challenges posed by incomplete sequential data.\\n2. 
The paper introduces an innovative temporal semantic consistency constraint, which markedly enhances the efficacy of subspace clustering for sequential data.\\n3. The paper provides a rigorous theoretical analysis, enabling an equivalent approach even in the presence of missing data, whilst effectively exploring temporal semantics.\", \"weaknesses\": \"1. In this paper, while the algorithm has already been exhaustively described and experimentally validated, it is recommended to include an analysis of the algorithm's time complexity to further enhance the completeness.\\n2. It is recommended to incorporate additional evaluation metrics to further strengthen the assessment of its performance.\", \"questions\": \"1. Is the objective function convex? If so, it is suggested to add a convergence proof instead of just giving Figure 6.\\n2. Did the authors use five datasets or four? Why is it four at one time and five at another?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper focuses on clustering sequential data with missing attributes. Unlike conventional heuristic methods, this paper leverages temporal information within sequential data, and directly clusters incomplete data under rigorous theoretical guarantees. It employs an algebraic subspace analysis and develops a theoretically grounded alternative. The handling of missing data and the exploration of temporal semantics are unified into a shared framework. Experimental results illustrate the effectiveness.\\n\\nThe experiments are not sufficiently well designed. The original performance of the baselines is not compared, and the quantitative analysis is limited. The presentation of the experiments contains many errors. Moreover, there is a lack of any temporal clustering methods. 
The authors directly introduce the formulas without elaborating on how the symbols are connected to the data in this work, which makes the paper very hard to understand. Besides, the analysis of the algorithm's time complexity is missing. The ablation experiment setup is not detailed. Also, there is no rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"The experiments are not sufficiently well designed. The original performance of the baselines is not compared, and the quantitative analysis is limited. The presentation of the experiments contains many errors. Moreover, there is a lack of any temporal clustering methods. The authors directly introduce the formulas without elaborating on how the symbols are connected to the data in this work, which makes the paper very hard to understand. Besides, the analysis of the algorithm's time complexity is missing. The ablation experiment setup is also not detailed.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"Comments:\\nThe manuscript presents a novel clustering framework, ETC-IC, intended to tackle the issue of clustering sequential data with incomplete features. The topic is of significant contemporary relevance, given the increasing focus on data with missing attributes in the clustering domain.\", \"weaknesses\": \"Weaknesses:\\n1.\\tIt is recommended to provide a summary of the entire work through a framework figure.\\n2.\\tThe model is evaluated solely on the human motion dataset, and thus this work is validated within the human motion domain. 
Consequently, the current title is not appropriate; the authors should revise it to \\u2018human motion learning\\u2019 or evaluate this model in a broader context.\\n3.\\tIn the ABLATION STUDY section, the authors should provide a detailed explanation of the ablation experiment setup and non-temporal semantics to validate the module's effectiveness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper is well written.\", \"questions\": \"1.\\tIt is recommended to provide a summary of the entire work through a framework figure.\\n2.\\tThe model is evaluated solely on the human motion dataset, and thus this work is validated within the human motion domain. Consequently, the current title is not appropriate; the authors should revise it to \\u2018human motion learning\\u2019 or evaluate this model in a broader context.\\n3.\\tIn the ABLATION STUDY section, the authors should provide a detailed explanation of the ablation experiment setup and non-temporal semantics to validate the module's effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method for clustering of sequential data with missing values. It incorporates temporal constraints to model the expected continuity across time. It characterises the method theoretically, and evaluates on different motion segmentation benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The paper addresses an important problem of handling missing values.\\nS2. The empirical evaluation makes use of five different benchmarking data sets for motion segmentation.\", \"weaknesses\": \"W1. The presentation of the paper is weak, lacking discussion, which makes it hard to follow. 
Concretely, the scope of the paper is unclear initially, going from clustering of missing features to subspace clustering and back. After that, the method and its properties are presented, without discussing how it relates to existing work, what motivates each step, and whether there are design alternatives.\\nW2. The discussion of related work fails to convey which methods are related in terms of applicability to the problem under study or in terms of method similarities, and which are less closely related. Instead, the related work section lists technical aspects of different methods, without any assessment as to their suitability for the problem under study.\\nW3. The empirical study is weak. In the experimental evaluation, only methods for clustering with missing data are studied, but none for clustering temporal data. As the datasets under study are characterised by strong temporal signals, the competitors are thus very weak baselines.\\nW4. The paper fails to provide sufficient information about the setup in the experiments, and some details about the method are confusing. For example, for the experiment in Fig. 2, linear interpolation is used prior to running the method. This seems to contradict the purpose of the method of being able to handle missing data. Also, it is unclear if linear interpolation was also used prior to running the competitors. This should be clarified in the description, and experiments comparing with and without interpolation should be conducted.\\nW5. The accuracy in several of the experiments is very high, close to 80%, even when half of the data is missing - indicating that the problem might be too easy for any temporal method (as stated above, none of them are considered here). The experimental evaluation should thus include temporal clustering methods as well, possibly using interpolation if necessary (as in W4). 
More challenging datasets, where missing data has a stronger impact on accuracy, should be studied in order to understand the robustness of the method.\\nW6. It is unclear how quickly the method converges in general; only one example is provided for one of the datasets. The paper should provide convergence results across datasets and runs.\", \"questions\": \"Why do you not include any temporal clustering methods (possibly after interpolation)? How do temporal clustering methods perform on this type of data?\\nWhich of the methods in related work are applicable to your evaluation scenario? You could consider including a table that lists core properties, indicating which of them are met by which competitor, instead of (long) textual descriptions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel clustering framework called Exploring Temporal Semantic for Incomplete Clustering (ETC-IC), which leverages temporal information within sequential data to enhance the clustering accuracy. Unlike previous works, ETC-IC clusters data without requiring prior imputation, making the results less sensitive to missing data attribute issues. This work validates ETC-IC on 5 human motion benchmarks, and the proposed model consistently surpasses current SOTA methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(S1) The work proposes a novel angle to cluster human motion data considering data temporal sequence.\\n(S2) It outperforms the previous baselines when adapted with the \\u2018missing entries\\u2019 data processing proposed by this work.\", \"weaknesses\": \"(W1) Experiment -- The experiments are not sufficiently well designed. Firstly, they adapt the proposed MAR and MNAR directly to the previous baselines without comparing the baselines\\u2019 original performances. 
Secondly, one of the core concepts in this paper, missing entries, is manually designed and generated rather than occurring naturally in the dataset. The results should also show how the model behaves on samples without missing entries to demonstrate the model\\u2019s general capacity. Thirdly, the quantitative analysis is poor, where Figure 5 only showcases some samples while Figure 7 is a case study rather than a quantitative analysis.\\n\\n(W2) Presentation -- Moreover, the presentation of the experiments contains many errors. Firstly, both Figure 1 and Figure 8 report results on 5 datasets, but the figure caption and the paper state that there are only 4 datasets instead of 5. Secondly, the introduction to Mocap is poor, as it doesn\\u2019t clearly introduce what the sequence data are. Thirdly, Fig 2 (b) and Fig 3 (b) are of low quality. There is a clear obstruction between the plot lines and the bottom-left frames. Fourthly, the ablation study is reported in a table but presented as a figure (Figure 8). Also, the caption is on top of Figure 8 while all the other figure captions are at the bottom. \\n\\n(W3) Lack of guidance -- Besides the experiments, it\\u2019s also very hard to comprehend the authors\\u2019 formal derivation, as there are very few intuitive explanations. Without clear guidance and an introduction to the data, the authors directly introduce the formulas without elaborating on how the symbols are connected to the data in this work. Later on, in the experiments, only a simple ablation study in Figure 8 shows the effectiveness of \\u2018temporal semantics\\u2019, while it doesn\\u2019t analyze the effectiveness of Theorems 1 to 4 respectively, leaving it unknown which part of the theory truly works, and which part might fail. 
Also, Theorem 3 is missing; it is replaced by \\u2018Proposition 3\\u2019.\", \"questions\": \"(Q1) Why are Figure 3 to Figure 6 only reported on a single dataset while ignoring the other 4 selected datasets?\\n\\n(Q2) Please introduce the data in a more structured way before Section 3. A clearer definition should also be introduced to show what is meant by \\u2018temporal semantics\\u2019 and why it is important, instead of discussing the concept in a high-level way.\\n\\n(Q3) Please also include a description of the temporal semantics in Algorithm 1 to make explicit how it works.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4vPC6Aj6N7
Multi-Agent Reinforcement Learning from Human Feedback: Data Coverage and Algorithmic Techniques
[ "Natalia Zhang", "Xinqi Wang", "Qiwen Cui", "Runlong Zhou", "Sham M. Kakade", "Simon Shaolei Du" ]
We initiate the study of Multi-Agent Reinforcement Learning from Human Feedback (MARLHF), exploring both theoretical foundations and empirical validations. We define the task as identifying Nash equilibrium from a preference-only offline dataset in general-sum games, a problem marked by the challenge of sparse feedback signals. Our theory establishes the upper complexity bounds for Nash Equilibrium in effective MARLHF, demonstrating that single-policy coverage is inadequate and highlighting the importance of unilateral dataset coverage. These theoretical insights are verified through comprehensive experiments. To enhance the practical performance, we further introduce two algorithmic techniques. (1) We propose a Mean Squared Error (MSE) regularization along the time axis to achieve a more uniform reward distribution and improve reward learning outcomes. (2) We propose an extra penalty based on dataset distribution to incorporate pessimism, enhancing stability and effectiveness during training. Our findings underscore the multifaceted approach required for MARLHF, paving the way for effective preference-based multi-agent systems.
[ "multi-agent reinforcement learning", "reinforcement learning with human feedback", "dataset coverage" ]
Reject
https://openreview.net/pdf?id=4vPC6Aj6N7
https://openreview.net/forum?id=4vPC6Aj6N7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yelwYHZwL5", "x6pkH5PwZN", "wy0tbzJhTS", "sVBv4plS3g", "qO45D07aNR", "q0gX5x0oBB", "mGAUvesclk", "Up8Cbl23WE", "UCwYUZ2NF5", "ToD373CTYZ", "SqtjkyB8zm", "SfQzFBsj1H", "SaQuoj29Lw", "R90X7Tnh8I", "OdWhUNKI9F", "JMlw3Rl3h2", "HPWsA9yGLh", "DVc5tVWTRu", "BMh7OCE70b", "9MptSHq7Yq", "6aqu0evKOL", "4vRscTbCVk", "2OMJ24svHg", "2BIl5Tj0CI", "17uIgLm7lA" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732525772118, 1730626291301, 1732607693207, 1730312611682, 1732674794829, 1737523834828, 1733286748297, 1732525913458, 1732525747783, 1732569814712, 1732563767563, 1733286402713, 1732525017362, 1732524656331, 1732525530606, 1733199278932, 1732525537546, 1730129951610, 1733269183377, 1733936252645, 1732525366370, 1730663880996, 1732525937964, 1732661236393, 1732524273755 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_Zpkc" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_oCgR" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_TKUr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_TKUr" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_Zpkc" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_oCgR" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_TKUr" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Area_Chair_uk29" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_1Zjc" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ], [ "ICLR.cc/2025/Conference/Submission7373/Reviewer_1Zjc" ], [ "ICLR.cc/2025/Conference/Submission7373/Authors" ] ], "structured_content_str": [ "{\"comment\": \"details in Section 6 of the updated papaer.\\n\\n| Algorithm | Dataset | Spread-v3 | Reference-v3 | Overcooked |\\n|--------------------------|------------------|------------------|------------------|-------------------|\\n| **VDN with Pessimism Penalty** | Diversified | -21.16 \\u00b1 0.54 | -18.89 \\u00b1 0.60 | **238.89 \\u00b1 3.50** |\\n| | Mix-Unilateral | -21.03 \\u00b1 0.44 | -18.80 \\u00b1 0.63 | 221.80 \\u00b1 26.66 |\\n| | Mix-Expert | -20.98 \\u00b1 0.54 | -18.80 \\u00b1 0.44 | 35.26 \\u00b1 55.19 |\\n| | Pure-Expert | -21.01 \\u00b1 0.57 | -28.97 \\u00b1 2.89 | 3.36 \\u00b1 7.19 |\\n| **MAIQL** | Diversified | -25.33 \\u00b1 1.40 | -22.15 \\u00b1 0.55 | **16.59 \\u00b1 11.22** |\\n| | Mix-Unilateral | -23.25 \\u00b1 1.06 | -23.22 \\u00b1 1.37 | 0.00 \\u00b1 0.00 |\\n| | Mix-Expert | -23.26 \\u00b1 0.90 | -24.21 \\u00b1 1.60 | 0.00 \\u00b1 0.00 |\\n| | Pure-Expert | -26.01 \\u00b1 1.53 | -29.47 \\u00b1 1.65 | 0.00 \\u00b1 0.00 |\\n| **MABCQ** | Diversified | -20.02 \\u00b1 0.64 | -17.64 \\u00b1 0.43 | **239.34 \\u00b1 1.67** |\\n| | Mix-Unilateral | -19.47 \\u00b1 0.33 | -17.64 \\u00b1 1.11 | 215.01 \\u00b1 65.43 |\\n| | Mix-Expert | -19.42 \\u00b1 0.17 | -17.88 \\u00b1 0.78 | 50.32 
\\u00b1 82.82 |\\n| | Pure-Expert | -20.56 \\u00b1 0.38 | -25.90 \\u00b1 1.11 | 1.14 \\u00b1 3.46 |\"}", "{\"summary\": \"The paper addresses the problem of trying to learn human preferences (this behaviour is better than that behaviour) in a multi agent RL setup. In this case satisfactory learning means a Nash-equilibrium is reached between all policies. The authors positions the paper as an initial study into Multiagent Reinforcement Learning from Human Feedback.\\n\\nThe paper shows how pure expert policies are not always the best for maximising overall score, and that mixing in less expert policies in some cases causes an overall higher score to be reached in the MARLHF case. This is proved, theoretically. They also show that it is often easier to learn what policies score higher by having unilaterally divergent policies acting in the environment, where a single agent is using a sub-optimal policy. The authors call this approach unilateral coverage. By having this unilateral agent in the environment it becomes simpler to observe what policies may be truly optimal within the environment. In addition upper complexity bounds are established for Nash Equilibrium in effective MARLHF.\\n\\nThe process to implement this approach is to learn a reward function from a preference dataset while mitigating extrapolation errors with a pessimism term and then determining a final policy. Human Feedback is itself simulated using the Bradley-Terry-Luce model to rank solutions.\", \"the_authors_make_2_particular_contributions_to_implement_their_insights\": \"Applying MSE regularisation to the training data to distribute rewards more evenly across timesteps, which helps to avoid temporal concentration. 
This essentially takes the sparse reward signals from the Bradley-Terry-Luce model and spreads them out to produce reward over more timesteps.\\nDataset distribution-based penalties are used to constrain exploration to known regions of the state space.\", \"their_empirical_evaluation_spans_three_multi_agent_scenarios\": \"cooperative target coverage, coordinated pursuit, and communication-dependent navigation. They show that incorporating imperfect policies is helpful for learning higher scoring policies during training. In harder tasks, unilateral coverage and diversity become more important, and more diverse datasets led to lower variance in training outcomes. The authors also introduce a principled standardization technique for hyperparameter tuning across environments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"In terms of the proofs, there is a simple but convincing proof by counterexample provided for theorem 1 (not contradiction, as stated).\\nThere is an explicit bound found on the Nash gap. \\n\\nHyperparameters used in the training are provided, multiple seeds are used, and results that don\\u2019t support the desired conclusion are presented. Multiple environments are tested, and clear ablation studies are done.\\n\\nThe paper makes an interesting theoretical contribution by establishing fundamental results about Multi-Agent Reinforcement Learning from Human Feedback (MARLHF). The authors prove why single-policy coverage is insufficient and demonstrate that unilateral coverage is both necessary and sufficient for learning Nash equilibria. These theoretical foundations are presented with clear proofs that are well constructed. 
These theoretical results then explicitly inform the design of the framework which is clearly stated and explained.\\n\\nThe empirical work is comprehensive and well-designed, testing their approach across three distinct multi-agent scenarios that each present different challenges (cooperative target coverage, coordinated pursuit, and communication-dependent navigation). The experiments validate both the theoretical insights about dataset coverage and the effectiveness of their algorithmic innovations. Their ablation studies are thorough and give clear evidence for the value of their MSE regularization and dataset distribution-related penalties. The authors also introduce a practical standardization technique for hyperparameter tuning that works across different environments.\\n\\nThe clarity of the experimental setup makes the work also highly reproducible\", \"weaknesses\": \"The main weakness is that despite the paper's title and framing, there is no actual human feedback involved in any of the experiments. Instead, the authors simulate preferences using the Bradley-Terry-Luce model based on known reward functions from the environments. This is a significant limitation because real human preferences are likely to be much noisier, inconsistent, and potentially non-transitive compared to their simulated preferences. The paper would be more accurately titled as \\\"Multi-Agent Reinforcement Learning from Simulated Preferences\\\" or similar, and should more explicitly acknowledge this limitation and discuss how their approach might need to be modified for real human feedback.\\n\\nWhile thorough, the theoretical results rely heavily on assumptions that may not hold in practice. The paper assumes linear Markov games and works with known feature mappings, but doesn't discuss enough how these assumptions might limit real-world applicability. 
Additionally, although the paper proves that their theoretical algorithm converges to Nash equilibria, the practical implementation uses different algorithms (VDN-based) with no theoretical guarantees. This gap between theory and practice is not sufficiently discussed. The paper also doesn't explore whether the Nash equilibrium is actually desirable in all cases - in some scenarios, other solution concepts might better align with human preferences. This again is one of the major weaknesses with the unclear framing.\\n\\nThe experimental evaluation, while systematic, is limited to relatively simple environments in the Multi-Agent Particle Environment (MPE) framework. These environments, while useful for testing basic concepts, are far simpler than real-world multi-agent scenarios. The paper doesn't adequately discuss how their approach might scale to more complex environments or to scenarios with larger numbers of agents. Their results showing that mixed-skill policies can outperform pure expert policies raise questions about whether their reward modeling approach is capturing the true objectives of the tasks. \\n\\nAnother important weakness in the paper's empirical evaluation is the absence of statistical significance testing. Although results with means and standard deviations across 5 random seeds are given, they don't perform any statistical analysis to validate the conclusions. This is particularly problematic given the small sample size - with only 5 seeds, the reliability of their comparisons is questionable. The paper lacks hypothesis tests. This makes it difficult to determine if the reported differences between approaches are statistically significant, especially in cases where the differences appear small relative to their standard deviations. For example, in Spread-v3, it's unclear whether the difference between \\\"Mix-Unilateral\\\" (-20.98 \\u00b1 0.56) and \\\"Mix-Expert\\\" (-21.11 \\u00b1 1.16) is meaningful. 
The lack of statistical rigor undermines the strength of the paper's empirical conclusions and the claims made about the benefits of their approaches.\", \"questions\": \"How would your approach need to be modified to handle inconsistent or non-transitive preferences that often occur with real human feedback?\\nWhy do you call the paper MARLHF when there is clearly no HF?\\nThe practical implementation differs significantly from the theoretical algorithm - can you explain this gap and discuss whether any theoretical guarantees carry over?\\nGiven the relative simplicity of the tasks, why were only 5 random seeds used for the experiments?\\nWhy weren't statistical significance tests performed to validate the comparative results?\\nHow well does your approach scale with increasing numbers of agents? \\nIn cases where mixed-skill policies outperform pure expert policies, can you verify that this reflects genuine improvement rather than issues with reward modeling?\\nHave you tested MARL algorithms other than VDN?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive feedback!\\n\\n#### Additional task\\n\\nSpecifically, the three MPE environments occur in a continuous state space and focus on location control, where agents need to learn non-trivial force control. On the other hand, Overcooked occurs in a discrete state space, emphasizing strategy learning, with a more direct relationship between actions and dynamics. \\n\\nIndeed, selecting an environment that is both simple yet non-trivial is not an easy task. The setup of PbMARL poses significant disadvantages in environments with long episodes and complex reward structures, which can dominate the experimental results and make it difficult to reflect differences brought by the datasets. 
We will continue exploring suitable environments for further experiments.\\n\\n##### MSE results\\nConsidering the page limitations, we have removed the related results and discussions from the main text. \\n\\nAnother consideration is that, since different reward-assignment methods can lead to similar optimal policies and trajectory preferences, the MSE between predicted and actual rewards may not always effectively reflect the quality of the reward model in PbRL. For example, in the Overcooked environment, assigning a reward to **cooking the dish** and assigning a reward to **serving the dish** results in very similar returns, as a complete scoring period involves both operations. However, these two reward functions will have a squared difference of 2.\\nTo avoid potential misunderstandings, we decided to exclude it from the main text. \\n\\nYou can still find the relevant discussions and experimental results in Appendix B.5.\\n\\n| | Spread-v3 | Tag-v3 | Reference-v3 | Overcooked |\\n|----------------|-----------|--------|--------------|------------|\\n| Diversified | 0.434 | 1.46 | 1.19 | 2.04 |\\n| Mix-Unilateral | 0.647 | 1.52 | 1.09 | 1.98 |\\n| Mix-Expert | 0.578 | 1.78 | 1.09 | 2.17 |\\n| Pure-Expert | 0.673 | 1.48 | 2.33 | 1.72 |\"}", "{\"summary\": \"This study introduces Multi-Agent Reinforcement Learning from Human Feedback (MARLHF) to find Nash equilibria from preference-based data with sparse feedback. A key technique in this paper is to use MSE regularization for uniform rewards and a pessimism-based penalty to improve stability and performance, enabling more effective preference-based multi-agent systems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The theoretical analysis presented in this paper is solid and clear, providing a sound theoretical bound for the proposed method to solve MARLHF. 
Additionally, the authors conduct various experiments to demonstrate the effectiveness of the proposed method, even when applied to offline datasets lacking uniform coverage.\", \"weaknesses\": \"1. The discussion section on related works is incomplete. The authors should provide a more thorough discussion of recent advancements in MARL and offline RLHF ([1]-[4]). Additionally, the paper emphasizes the importance of incorporating reward regularization in the objective function for the current task. However, similar ideas have been adopted in different contexts and should be discussed carefully ([3]-[6]).\\n\\n2. The current experiments primarily showcase different variants of the proposed methods and include an ablation study. Could the authors include more baseline methods for comparison? Additionally, incorporating more tasks (e.g., five tasks) would strengthen the findings and provide greater convincing power for readers.\\n\\n3. The theoretical analysis currently focuses solely on the linear function approximation setting, which may not be realistic given the use of neural networks in the experiments. Could the authors extend the analysis to accommodate general function approximations, or clarify how the experimental setup meets the requirements of linear function approximation?\\n\\n4. In Line 300, it seems that someone even left comments colored in blue, which may leak the information of the authors. It is suggested that the authors should double-check the submitted draft to avoid this careless mistake.\\n\\n5. In Line 276, the reference to \\\"an approximate Nash equilibrium policy\\\" in the theorem lacks clarity, as it does not illustrate the approximation error in relation to the size of the offline dataset. The authors should expand on the implications of the derived bound and compare their results with existing theoretical findings in the offline RL and MARL literature.\\n\\n\\n[1] Wang, Yuanhao, et al. 
\\\"Breaking the curse of multiagency: Provably efficient decentralized multi-agent rl with function approximation.\\\" The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.\\n\\n[2] Xiong, Nuoya, et al. \\\"Sample-Efficient Multi-Agent RL: An Optimization Perspective.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[3] Liu, Zhihan, et al. \\\"Provably mitigating overoptimization in rlhf: Your sft loss is implicitly an adversarial regularizer.\\\" arXiv preprint arXiv:2405.16436 (2024).\\n\\n[4] Cen, Shicong, et al. \\\"Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF.\\\" arXiv preprint arXiv:2405.19320 (2024).\\n\\n[5] Mete, Akshay, et al. \\\"Reward biased maximum likelihood estimation for reinforcement learning.\\\" Learning for Dynamics and Control. PMLR, 2021.\\n\\n[6] Xie, Tengyang, et al. \\\"Bellman-consistent pessimism for offline reinforcement learning.\\\" Advances in neural information processing systems 34 (2021): 6683-6694.\", \"questions\": \"1. This paper analyzes the RLHF setting; however, the definition of the performance metric remains unchanged from the RL setting without KL regularization. Could the authors provide further clarification on this?\\n\\n2. Could the authors highlight the novel aspects of the current theoretical analysis that differentiate it from the offline MARL setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the insightful explanation on task selection. The example of dish cooking/serving is intuitive and makes sense to me. Indeed, you can never be too careful when establishing causal relationships.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your thoughtful response. We are pleased to hear that the new examples helped clarify things. 
We greatly appreciate the reviewer for pointing these out, as it has been instrumental in improving our paper.\"}", "{\"comment\": \"#### W1. The experiments are conducted on a limited range of tasks.\\nThank you for the advice! In the updated paper, experiments on Overcooked are added. For other algorithms, experiments with MAIQL and MABCQ are included. These two algorithms are commonly used in offline MARL papers. See results below and more details in Section 6 of the updated paper.\\n\\n#### Q3 & Q4. Inconsistency between Table 2 and claims.\\nWe have corrected the corresponding statement in the updated version of our paper. \\nThe current data is accurate, but the claim \\\"in more challenging environments...\\\" was based on data from a previous version. We updated the experimental data before submission but mistakenly submitted an outdated text version. \\n\\nAccording to the earlier data, the expert in the tag environment was relatively weaker, and a more diverse dataset indeed resulted in higher returns. However, for fairness, we switched to a stronger expert, which significantly improved the results of imitation learning (high-$\\\\beta$ settings) while reducing the impact of diversity on the improvement of the reward model.\\n\\n#### Q5. Table 3: Why does setting to a magnitude as large as 100 yield such good results?\\nThank you for pointing this out. The original paper missed the explanation. In our actual experiments, we clipped $\\\\beta \\\\log \\\\pi(s,a)$ to $[-10, 1]$, which was not mentioned. The clipping serves to smooth out the differences in penalty terms among behaviors with relatively high density in the dataset. After clipping, excessively high $\\\\beta$ values no longer dominate the loss, and the performance for higher $\\\\beta$ becomes very similar. We have added this clarification in the updated version of the paper.\\n\\n#### Q6. 
Figure 2: What does the x-axis represent?\\nThe X-axis in Figure 2 represents time, so all the graphs show the reward/predicted reward curves over time. Their integral corresponds to the return.\\n\\n#### Typos in the paper\\nThank you for pointing out the typos in our paper, we have corrected them now:\\n- Figure 1: Replace $\\\\pi_{ref}$ with $\\\\pi_b$.\\n- Line 300: Delete the blue words.\"}", "{\"comment\": \"#### W1. Reward regularization idea in previous works\\nOur work introduces reward regularization specifically within the context of preference-based MARL, addressing challenges due to the complexity of multi-agent interactions and long-horizon dependencies. While different from prior works that use adversarial adjustments [5] or Bellman-consistent pessimism [6] in general RL settings, we use MSE loss for reward regularization. However, we share a similar high-level idea to stabilize reward model training. We have incorporated these insights into the related work section.\\n\\n#### W2. Could the authors include more baseline methods and more tasks for comparison? \\nAs an initial work in preference-based MARL, to the best of our knowledge, there are **no directly comparable traditional approaches**. \\nThe most straightforward comparison is to combine a reward model with offline RL algorithms. In the updated paper, experiments on Overcooked are added. For other algorithms, experiments with MAIQL and MABCQ are also included. These two algorithms are commonly used in offline MARL papers. See results below and more details in Section 6 of the updated paper.\\n\\n#### W3. Could the authors extend the analysis to accommodate general function approximations, or clarify how the experimental setup meets the requirements of linear function approximation?\\nWe agree that our theory based on linear assumptions does not perfectly match our experiments. 
However, we want to emphasize that our theory provides a fundamental understanding of offline preference-based MARL and guided the experiment design, where we compared datasets with different levels of diversity. Extending our analysis to general function approximation would be an interesting future direction.\\n\\n#### W4. In Line 276, the reference to \\\"an approximate Nash equilibrium policy\\\" in the theorem lacks clarity\\nWe provided the complete theorem in the appendix (Theorem 4 in Line 1074). The approximation error is bounded by the inverse of the covariance norm, which has an $O(1/\\sqrt{n})$ rate if we have $n$ samples from a fixed distribution. This type of bound is also widely adopted in the offline RL literature [1, 2].\\n\\n#### W5 & Q2. The authors should expand on the implications of the derived bound and compare their results with existing theoretical findings in the offline RL and MARL literature. \\nFor preference-based MARL, we extend the analysis to accommodate preference-based datasets, which differ from standard offline datasets with fixed state-action-reward tuples. This leads to different covariance matrices and uncertainty measures compared with standard offline MARL. Additionally, we establish that unilateral coverage is sufficient for learning approximate Nash equilibria, deriving bounds that explicitly account for preference-based dynamics.\\n\\n#### Q1. This paper analyzes the RLHF setting; however, the definition of the performance metric remains unchanged from the RL setting without KL regularization. Could the authors provide further clarification on this?\\nIn our study, we adopt the standard performance metric from traditional RL without incorporating KL regularization. We aim to propose a basic setting for preference-based MARL, aligning with the general cases. 
While KL regularization is an effective practical technique, it is also common in the RL literature to use a performance metric without additional regularization terms [3, 4]. We acknowledge that incorporating KL regularization can be beneficial in certain contexts, and we plan to explore its integration in future work.\\n\\n#### Typos and writing clarity\\nThank you for pointing out the typos and writing clarity problems. We have removed the blue words. \\nWe have enriched our related work section with the recent advancements in MARL and RLHF that you mentioned.\\n\\n> [1] Jin, Ying, Zhuoran Yang, and Zhaoran Wang. \\\"Is pessimism provably efficient for offline rl?\\\" International Conference on Machine Learning. PMLR, 2021. \\n> [2] Zhong, Han, et al. \\\"Pessimistic minimax value iteration: Provably efficient equilibrium learning from offline datasets.\\\" International Conference on Machine Learning. PMLR, 2022. \\n> [3] Longyang Huang, Botao Dong, Ning Pang, et al. Offline Reinforcement Learning without Regularization and Pessimism. TechRxiv. June 07, 2024. \\n> [4] Le Lan, Charline, Marc G. Bellemare, and Pablo Samuel Castro. \\\"Metrics and continuity in reinforcement learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. \\n> [5] Mete, Akshay, et al. \\\"Reward biased maximum likelihood estimation for reinforcement learning.\\\" Learning for Dynamics and Control. PMLR, 2021. \\n> [6] Xie, Tengyang, et al. \\\"Bellman-consistent pessimism for offline reinforcement learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 6683-6694.\"}", "{\"comment\": \"Thank you for the clarification and additional results. Most of my concerns are addressed. I've raised my confidence to 3 but decided to keep the score.\\n\\n**About the additional task Overcooked**\\n\\nIt seems that finding a proper task where the empirical results align with the theoretical claims is not a trivial task in itself. 
The explanation that this task \\\"focuses on strategy learning and demands less on precision\\\" is also somewhat ambiguous to me. Maybe there are more tasks in JaxMARL that can demonstrate the importance of unilateral coverage? I hope to see more convincing task performance in future versions.\\n\\n**About the MSE results**\\n\\nWould you mind explaining why the MSE results in Table 2 were deleted in the updated paper?\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you to the authors. I believe that some improvements have been made and I am raising my score accordingly.\"}", "{\"comment\": \"Thank you for your feedback! We greatly appreciate your time and effort in reviewing our paper and are encouraged by your updated evaluation.\"}", "{\"comment\": \"Thank you for your review. Please find our responses to all your comments below.\\n\\n#### W1. W2. & Q1. Testing on more realistic and complex environments and conducting ablation studies with other MARL algorithms\\n\\nThank you for the advice! In the updated paper, we added the Overcooked environment to address the lack of long-duration, strategy-intensive task environments. Furthermore, we additionally tested two more algorithms: Multi-agent IQL (MAIQL) [1] and Multi-agent BCQ (MABCQ) [2]. \\n\\n- Remark 1: The results on Overcooked highlight the importance of dataset diversity and unilateral data. \\n\\n- Remark 2: Since preference-based MARL is already a new and complex setup, we intentionally selected simple implementations for both the environment and the algorithms to avoid introducing additional uncertainties. We did not test on-policy algorithms like PPO because they require the replay buffer to be collected by the currently optimized policy, making them unsuitable for direct application in offline reinforcement learning, which is the focus of our setting. 
On the other hand, the QMIX algorithm is more suited for environments involving a large number of agents and complex policies.\\nTherefore, for additional experiments with more algorithms, we adopted Multi-agent IQL (MAIQL) [1] and Multi-agent BCQ (MABCQ) [2], which are simpler to implement but commonly used in offline MARL papers under the centralized-training decentralized-execution (CTDE) framework. \\n\\n#### Q2. What challenges do you anticipate on more complex MARL benchmarks?\\nOne of the primary challenges is reward modeling. In more complex environments (e.g., SMAC), especially in longer-horizon, continuous settings, the lack of supervisory signals becomes more apparent. Since reward models learned through preference learning often exhibit significant errors, agents tend to perform poorly in low-robustness tasks that emphasize fine-grained operations. This contrast is evident in the experimental results of MPE and the newly added Overcooked environment: in Overcooked, the agents can generally achieve performance close to the expert level, whereas in the three MPE environments, there is a noticeable gap between the two.\\n\\n#### Q2. How sensitive might performance be to the choice of hyperparameters $\\\\alpha$ and $\\\\beta$?\\nAn ablation study on $\\\\beta$ and $\\\\alpha$ can be found in Section 6 and Appendix B.5.\\nThe choice of $\\\\beta$ is highly robust. Due to clipping, excessively large $\\\\beta$ values will not dominate the entire reward function. As a result, larger $\\\\beta$ values almost never degrade the agent's performance in our experiments. This allows us to increase $\\\\beta$ with relative confidence. Therefore, we generally recommend setting $\\\\beta$ to a value between 10 and 100. 
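For concreteness, the clipping mechanism can be sketched as follows (an illustrative simplification; only the $[-10, 1]$ clip range is taken from our description above, while the function name, reward values, and probabilities are made up for the example):

```python
import math

def shaped_reward(r_hat, log_pi_b, beta, clip_lo=-10.0, clip_hi=1.0):
    """Add the density-based pessimism term beta * log pi_b(s, a),
    clipped to [clip_lo, clip_hi] so that a large beta cannot dominate."""
    penalty = min(clip_hi, max(clip_lo, beta * log_pi_b))
    return r_hat + penalty

# With beta = 10: an in-distribution action keeps a mild penalty,
# while a rare (out-of-distribution) action saturates at the lower clip.
common = shaped_reward(0.5, math.log(0.8), beta=10.0)
rare = shaped_reward(0.5, math.log(1e-4), beta=10.0)  # clipped to 0.5 - 10
```

Because the penalty saturates, further increasing beta (e.g., to 100) changes the shaped reward very little, which is why the higher-beta settings in the table below behave so similarly.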
\\n| $\\\\beta$ | 0 | 0.1 | 1 | 10 | 100 | \\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Spread-v3 | -22.56 \\u00b1 1.61 | -22.03 \\u00b1 0.67 | -20.82 \\u00b1 0.53 | -20.46 \\u00b1 0.51 | -20.35 \\u00b1 0.43 | \\n| Tag-v3 | 4.11 \\u00b1 1.66 | 4.25 \\u00b1 0.53 | 10.96 \\u00b1 1.20 | 28.88 \\u00b1 1.02 | 29.53 \\u00b1 1.35 | \\n| Reference-v3 | -19.69 \\u00b1 0.36 | -19.37 \\u00b1 0.53 | -18.89 \\u00b1 0.78 | -18.33 \\u00b1 0.42 | -18.54 \\u00b1 0.46 | \\n| Overcooked | 0.00 \\u00b1 0.00 | 0.00 \\u00b1 0.00 | 149.53 \\u00b1 86.74 | 238.89 \\u00b1 3.50 | **240 \\u00b1 0.00** |\\nThe choice of $\\\\alpha$, however, is more nuanced. Lower $\\\\alpha$ values tend to reduce the reward model's loss, but since smoother curves are often more suitable for learning, higher reward model losses can sometimes lead to better RL training results. Empirically, setting $\\\\alpha$ to 1 gives near-optimal results.\\n\\n| $\\\\alpha$ | 0 | 0.001 | 0.01 | 0.1 | 1 | 10 | 100 | 1000 |\\n|----------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| Spread-v3 | 0.350 | 0.345 | 0.347 | 0.351 | 0.361 | 0.389 | 0.460 | 0.603 |\\n| Tag-v3 | 0.465 | 0.431 | 0.440 | 0.455 | 0.484 | 0.531 | 0.603 | 0.676 |\\n| Reference-v3 | 0.358 | 0.356 | 0.362 | 0.374 | 0.393 | 0.434 | 0.508 | 0.623 |\\n\\n#### Q3. What policy was used to generate responses for collecting preference feedback?\\nWe used VDN for the three MPE environments and IPPO on Overcooked to train the expert policy. We tested VDN, IPPO, MAPPO, and QMIX on all the environments and took the best agent trained as the expert. Suboptimal trajectories were collected using checkpoints saved during the training of the expert policy.\\nMore details can be found in Section 6.\\n\\n#### Q4. How was the preference feedback collected?\\nWe simulated the preferences with the Bradley-Terry Model. 
Details can be found in Section 6.\"}", "{\"title\": \"New experiment results\", \"comment\": \"| Algorithm | Dataset | Spread-v3 | Reference-v3 | Overcooked |\\n|--------------------------|------------------|------------------|------------------|-------------------|\\n| **VDN with Pessimism Penalty** | Diversified | -21.16 \\u00b1 0.54 | -18.89 \\u00b1 0.60 | **238.89 \\u00b1 3.50** |\\n| | Mix-Unilateral | -21.03 \\u00b1 0.44 | -18.80 \\u00b1 0.63 | 221.80 \\u00b1 26.66 |\\n| | Mix-Expert | -20.98 \\u00b1 0.54 | -18.80 \\u00b1 0.44 | 35.26 \\u00b1 55.19 |\\n| | Pure-Expert | -21.01 \\u00b1 0.57 | -28.97 \\u00b1 2.89 | 3.36 \\u00b1 7.19 |\\n| **MAIQL** | Diversified | -25.33 \\u00b1 1.40 | -22.15 \\u00b1 0.55 | **16.59 \\u00b1 11.22** |\\n| | Mix-Unilateral | -23.25 \\u00b1 1.06 | -23.22 \\u00b1 1.37 | 0.00 \\u00b1 0.00 |\\n| | Mix-Expert | -23.26 \\u00b1 0.90 | -24.21 \\u00b1 1.60 | 0.00 \\u00b1 0.00 |\\n| | Pure-Expert | -26.01 \\u00b1 1.53 | -29.47 \\u00b1 1.65 | 0.00 \\u00b1 0.00 |\\n| **MABCQ** | Diversified | -20.02 \\u00b1 0.64 | -17.64 \\u00b1 0.43 | **239.34 \\u00b1 1.67** |\\n| | Mix-Unilateral | -19.47 \\u00b1 0.33 | -17.64 \\u00b1 1.11 | 215.01 \\u00b1 65.43 |\\n| | Mix-Expert | -19.42 \\u00b1 0.17 | -17.88 \\u00b1 0.78 | 50.32 \\u00b1 82.82 |\\n| | Pure-Expert | -20.56 \\u00b1 0.38 | -25.90 \\u00b1 1.11 | 1.14 \\u00b1 3.46 |\"}", "{\"comment\": \"Thank you for your review. Please find our responses to your comments below.\\n\\n#### WP1. Q1. & Q2. There is no actual human feedback involved.\\nYou are correct that our current experiments rely on simulated preferences rather than true human feedback. To better reflect this, we will change the title to \\\"Preference-Based Multi-Agent Reinforcement Learning\\\" to clarify our focus.\\n\\n#### WP2. The theoretical results rely heavily on assumptions that may not hold in practice.\\nWe agree that our theory based on linear assumptions does not perfectly match our experiments. 
However, we want to emphasize that our theory provides a fundamental understanding of offline preference-based MARL and guided the experiment design, where we compared datasets with different levels of diversity. Extending our analysis to general function approximation would be an interesting future direction.\\n\\n#### Q3. The paper proves that their theoretical algorithm converges to Nash equilibria, but the practical implementation uses different algorithms (VDN-based) with no theoretical guarantees.\\nOur theory demonstrates the data coverage conditions fundamentally needed even in the basic linear function approximation setting. For more real-world problems requiring general function approximation, such as neural networks, we chose VDN to solve these problems. Our experiments showed that unilateral coverage and more diversified datasets improve performance, verifying our theoretical insights.\\n\\n#### WP4. & Q4. With only 5 seeds, the reliability of their comparisons is questionable.\\nThank you for raising this point. We have rerun our experiments with 10 seeds and updated the data in the revised version. We observed that the variance in performance across different seeds remains very low, which we believe provides sufficient robustness for our findings.\\n\\n#### WP4. & Q5. The differences appear small relative to their standard deviations.\\nSome of the comparative results indeed lack persuasiveness, such as the examples you mentioned (-20.98 \\u00b1 0.56) and (-21.11 \\u00b1 1.16). As a result, our practice of bolding data with only minor advantages in the table may have been misleading. In the revised version, only comparison results with a p-value less than 0.05 in the significance test are bolded. Additionally, more precise language is used to describe the empirical conclusions.\\n\\n#### Q6. 
How well does your approach scale with increasing numbers of agents?\\nWe've conducted an experiment to test the scaling problem in Appendix B3 (B4 in the updated version). A brief conclusion is that, while our current approach manages the scaling of agents without introducing new problems, it does not specifically address the inherent issues of instability and complexity that are well-documented in traditional MARL. We added a paragraph discussing this in the main text of the updated paper.\\n\\n#### Q7. In cases where mixed-skill policies outperform pure expert policies, can you verify that this reflects genuine improvement rather than issues with reward modeling?\\nIn preference-based RL, reward modeling is an integral part of the complete algorithm, and our theoretical analysis does not separate reward modeling from the subsequent RL Oracle. For example, in Theorem 4 (line 1074), the approximation error is bounded by the inverse of the covariance norm, which depends on the diversity of the dataset. Therefore, the fact that a mixed dataset can provide a better reward model than a pure dataset is itself a genuine improvement.\\n\\n#### Q8. & WP3. Have you tested MARL algorithms other than VDN? & The experimental evaluation, while systematic, is limited to relatively simple environments in the Multi-Agent Particle Environment (MPE) framework.\\nThank you for the advice! In the updated paper, experiments on Overcooked are added. For other algorithms, experiments with MAIQL and MABCQ are added. These two algorithms are commonly used in offline MARL papers. See results below and more details in Section 6 of the updated paper.\"}", "{\"title\": \"Reply to the Authors\", \"comment\": \"Thank the authors for their replies and my concerns are partially solved. I am still not fully convinced why the current paper does not study the general function approximation setting and compare with other multiagent algorithms such as [1]. 
Hence, I would keep my score.\\n\\n[1] Yu, C., et al. \\\"The surprising effectiveness of ppo in cooperative, multi-agent games. arXiv 2021.\\\" arXiv preprint arXiv:2103.01955.\"}", "{\"comment\": \"| Algorithm | Dataset | Spread-v3 | Reference-v3 | Overcooked |\\n|--------------------------|------------------|------------------|------------------|-------------------|\\n| **VDN with Pessimism Penalty** | Diversified | -21.16 \\u00b1 0.54 | -18.89 \\u00b1 0.60 | **238.89 \\u00b1 3.50** |\\n| | Mix-Unilateral | -21.03 \\u00b1 0.44 | -18.80 \\u00b1 0.63 | 221.80 \\u00b1 26.66 |\\n| | Mix-Expert | -20.98 \\u00b1 0.54 | -18.80 \\u00b1 0.44 | 35.26 \\u00b1 55.19 |\\n| | Pure-Expert | -21.01 \\u00b1 0.57 | -28.97 \\u00b1 2.89 | 3.36 \\u00b1 7.19 |\\n| **MAIQL** | Diversified | -25.33 \\u00b1 1.40 | -22.15 \\u00b1 0.55 | **16.59 \\u00b1 11.22** |\\n| | Mix-Unilateral | -23.25 \\u00b1 1.06 | -23.22 \\u00b1 1.37 | 0.00 \\u00b1 0.00 |\\n| | Mix-Expert | -23.26 \\u00b1 0.90 | -24.21 \\u00b1 1.60 | 0.00 \\u00b1 0.00 |\\n| | Pure-Expert | -26.01 \\u00b1 1.53 | -29.47 \\u00b1 1.65 | 0.00 \\u00b1 0.00 |\\n| **MABCQ** | Diversified | -20.02 \\u00b1 0.64 | -17.64 \\u00b1 0.43 | **239.34 \\u00b1 1.67** |\\n| | Mix-Unilateral | -19.47 \\u00b1 0.33 | -17.64 \\u00b1 1.11 | 215.01 \\u00b1 65.43 |\\n| | Mix-Expert | -19.42 \\u00b1 0.17 | -17.88 \\u00b1 0.78 | 50.32 \\u00b1 82.82 |\\n| | Pure-Expert | -20.56 \\u00b1 0.38 | -25.90 \\u00b1 1.11 | 1.14 \\u00b1 3.46 |\"}", "{\"summary\": \"The paper seeks to establish theoretical foundations and make empirical validations for the new research field, Multi-Agent Reinforcement Learning from Human Feedback (MARLHF). The core theoretical contribution is proving that single-policy coverage is insufficient for learning approximate Nash equilibrium policies and that unilateral policy coverage is sufficient to do so. 
The empirical contribution lies in two techniques, namely, reward regularization which smoothens the reward distribution, and dataset distribution-based pessimism which handles the extrapolation errors. The experiments are designed to verify the correctness of the theoretical claims and the effectiveness of the empirical techniques.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"I am not an expert in RLHF, but to my best knowledge, this is the first work for aligning multi-agent systems with human feedback.\", \"The theoretical claims are concise and seems to be practically useful.\", \"The experiments are well designed for the purpose of verifying the proposed theoretical claims and empirical techniques.\"], \"weaknesses\": \"The experiments are conducted on a limited range of tasks, which may not be sufficient to verify the generality of the theoretical claims and empirical techniques.\\n\\nAs far as I can tell, there are no other obvious weaknesses of this paper. Potential weaknesses concerning the consistency between the experiment results and the corresponding conclusions are listed as questions below.\", \"questions\": [\"Figure 1: $\\\\pi_{ref}$, while mentioned in the caption, doesn't seem to be appearing in the figure. 
Do you mean $\\\\pi_b$?\", \"What does the blue text mean in Lines 300-301?\", \"Table 2: The claim in the capture, namely, \\\"in more challenging environments, such as Tag-v3, dataset diversity plays a substantially more significant role\\\", seems inconsistent with the data in the table, where both the mean and the variance of the return of Tag-v3 reach their best in the Pure-Expert dataset which has the least diversity.\", \"Table 2: The claim in Lines 419-420, namely, \\\"In more challenging tasks, as reflected by higher MSE, the importance of unilateral coverage and diversity becomes more pronounced.\\\", does not seem very obvious from the table, where the diversified and the mix-unilateral dataset achieve the best performance when (Spread-v3 for Mix-unilateral and Reference-v3 for Diversified) the corresponding MSE is low.\", \"Table 3: Why does setting $\\\\beta$ to a magnitude as large as 100 yield such good results? Doesn't the penalty term completely dominate the loss? Further, it seems strange to me that setting $\\\\beta$ across such a wide range (from 1 to 100) can yield almost the same result, especially when the dataset is the diversified one which contains a large fraction of low-return trajectories.\", \"Figure 2: What does the x-axis represent?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback!\\n#### General function approximation\\nWe believe that extending our result to general function follows the standard treatment of replacing linear function approximation with general function approximation. 
For example, it is straightforward to adapt the analysis for general function approximation in [1] to our framework.\\n#### Comparison with other multi-agent algorithms like MAPPO\\nWe did not test on-policy algorithms like PPO because they require the replay buffer to be collected by the currently optimized policy, making them unsuitable for direct application in offline reinforcement learning. \\n\\n[1] Zhang et al. \\\"Offline Learning in Markov Games with General Function Approximation.\\\" https://arxiv.org/pdf/2302.02571\"}", "{\"metareview\": \"This paper investigates offline multi-agent RL with preference data. The theoretical study establishes the insufficiency of single-policy coverage and demonstrates the need for unilateral dataset coverage. On the empirical side, an MSE reward regularizer and a pessimistic penalty are proposed. While the reviewers acknowledge the theoretical contribution, I have the following concerns about the work.\\n\\n1. Almost all the empirical results in the three small problems are not statistically significant. Essentially, this means that there is only one single environment in this paper -- Overcooked. But as suggested by 1Zjc, JaxMARL does have other environments, so I feel this paper at least needs a major revision to test on one more environment.\\n\\n2. I failed to find any evidence that the MSE reward regularizer actually contributes positively to the final performance. Table 4 shows that the method scores 240 in Overcooked even with $\\alpha=0$. In the reply to 1Zjc, there is an ablation study of $\\alpha$ and $\\beta$. But only $\\beta$ is studied in Overcooked; $\\alpha$ is not. So I cannot find any evidence supporting the effectiveness of $\\alpha$, the MSE reward regularizer. Figure 2(a) does not involve Overcooked either, which is the only environment this paper really should consider. 
The authors did not exclude the possibility that Figure 2(a) works because this regularizer overfits to the three small tasks.\\n\\n3. The theory and the two regularizers (MSE reward regularizer and pessimistic penalty) are entirely disconnected. I cannot see sufficient theoretical justification / motivation for those two regularizers. This is not a problem if the two work well. But again, this paper essentially has only 1 env and I cannot find evidence that the MSE reward regularizer actually matters in overcooked.\\n\\nTo summarize, I think the theoretical findings of this paper are interesting and promising. But the paper lacks sufficient empirical study to support the proposed regularizers. In fact, overcooked is only added during the rebuttal period. I feel the empirical part of the paper is not ready and needs at least a major revision. The paper may also benefit from connecting the two regularizers more with the proposed theory. I'm fine with doing theories with linear function approximation and doing experiments with neural networks. But the disconnection between theories and experiments in this submission now is far beyond linear function approximation and neural networks. \\n\\nIt is also worth mentioning that the authors accidentally removed the appendix entirely in the rebuttal period so I can only use the original appendix.\", \"additional_comments_on_reviewer_discussion\": \"Before the AC-reviewer discussion period, this paper has 6666. I checked all the comments and read the paper myself and raised the three concerns during AC-reviewer discussion. Reviewers are convinced by my arguments and lowered their scores accordingly. And no reviewer is able / willing to argue for acceptance.\"}", "{\"comment\": \"#### Q5. The inherent dependence between the policy used to train the reward model and the policy being learned.\\n\\nThe PARL paper you mentioned and the problem setting in this paper are not entirely the same. 
In the PARL paper, the agent performs online learning at the end, whereas in our paper, learning is restricted to offline training with a fixed dataset.\\n\\nIn offline reinforcement learning, since there are no performance guarantees outside the dataset, the learned policy is inherently constrained within the dataset, meaning it is \\\"inherently dependent on the offline dataset.\\\" In other words, because both the reward model training and the agent training use the same dataset, we can expect that for states where the agent achieves a low Bellman error, the reward can also be well-estimated. Therefore, this issue does not exist in offline preference-based MARL.\\n\\n#### Q6. How does the quality of the learned reward function vary with different levels of expertise and sparsity in preference feedback?\\n\\nIn our experiments, under the condition of maintaining the same level of diversity, higher-quality expert data resulted in better reward models. This is because the \\\"positive features\\\" in demonstrations from better experts are more prominent, with fewer irrelevant noise or erroneous demonstrations. Specifically, using the return difference between the final trained agent and the expert as a standard, suboptimal experts lead to significantly larger discrepancies compared to the best experts. 
Therefore, we primarily used the most expert-level demonstrations in our main experiments.\\n\\nSince we only have preferences between complete trajectories, the sparsity of preference feedback is fixed at 1 preference per episode pair.\\n\\n| Algorithm | Dataset | Spread-v3 | Reference-v3 | Overcooked |\\n|--------------------------|------------------|------------------|------------------|-------------------|\\n| **VDN with Pessimism Penalty** | Diversified | -21.16 \\u00b1 0.54 | -18.89 \\u00b1 0.60 | **238.89 \\u00b1 3.50** |\\n| | Mix-Unilateral | -21.03 \\u00b1 0.44 | -18.80 \\u00b1 0.63 | 221.80 \\u00b1 26.66 |\\n| | Mix-Expert | -20.98 \\u00b1 0.54 | -18.80 \\u00b1 0.44 | 35.26 \\u00b1 55.19 |\\n| | Pure-Expert | -21.01 \\u00b1 0.57 | -28.97 \\u00b1 2.89 | 3.36 \\u00b1 7.19 |\\n| **MAIQL** | Diversified | -25.33 \\u00b1 1.40 | -22.15 \\u00b1 0.55 | **16.59 \\u00b1 11.22** |\\n| | Mix-Unilateral | -23.25 \\u00b1 1.06 | -23.22 \\u00b1 1.37 | 0.00 \\u00b1 0.00 |\\n| | Mix-Expert | -23.26 \\u00b1 0.90 | -24.21 \\u00b1 1.60 | 0.00 \\u00b1 0.00 |\\n| | Pure-Expert | -26.01 \\u00b1 1.53 | -29.47 \\u00b1 1.65 | 0.00 \\u00b1 0.00 |\\n| **MABCQ** | Diversified | -20.02 \\u00b1 0.64 | -17.64 \\u00b1 0.43 | **239.34 \\u00b1 1.67** |\\n| | Mix-Unilateral | -19.47 \\u00b1 0.33 | -17.64 \\u00b1 1.11 | 215.01 \\u00b1 65.43 |\\n| | Mix-Expert | -19.42 \\u00b1 0.17 | -17.88 \\u00b1 0.78 | 50.32 \\u00b1 82.82 |\\n| | Pure-Expert | -20.56 \\u00b1 0.38 | -25.90 \\u00b1 1.11 | 1.14 \\u00b1 3.46 |\\n\\n\\n> [1] Kostrikov et al., Offline reinforcement learning with implicit Q-learning. https://arxiv.org/abs/2110.06169 \\n> [2] Fujimoto et al., Off-policy deep reinforcement learning without exploration. https://arxiv.org/abs/1812.02900\"}", "{\"summary\": \"This paper investigates the important and timely problem of multi-agent reinforcement learning from human feedback (MARLHF). 
The authors examine both theoretical and practical aspects of MARLHF, demonstrating that single policy coverage is insufficient and emphasizing the need for unilateral dataset coverage. To address the issues of sparse and spiky reward learning typical in standard RLHF, they propose two primary techniques: (1) mean squared error regularization to promote uniform reward distribution, and (2) an additional reward term based on state-action pair density within the dataset to introduce pessimism, using an imitation learning-based approach for density modeling. The final policy is then trained using the VDN algorithm. Overall, this MARLHF approach represents a significant step toward preference-based reinforcement learning in multi-agent systems.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper makes novel contributions to RLHF within multi-agent systems by framing the task as finding a Nash equilibrium in general-sum games and introducing innovative techniques for reward regularization and dataset distribution-based pessimism.\", \"The theoretical results are comprehensive and well-justified, effectively supporting the paper\\u2019s claims.\", \"The paper is generally well-written and easy to follow.\"], \"weaknesses\": [\"The empirical validation of the approach is limited, as the paper only includes experiments on three simple MPE environments. Since the authors utilized JAXMARL, testing on more realistic and complex environments from the JAXMARL API, such as Overcooked, Hanabi, or StarCraft, would strengthen the paper\\u2019s claims.\", \"The comparison with MARL baselines is insufficient, focusing only on VDN despite its known limitations in representation capacity. Conducting ablation studies with other MARL algorithms, such as MAPPO[1], IPPO[2], and QMIX[3], would provide further validation.\"], \"questions\": \"1. 
Why was VDN specifically chosen as the base MARL algorithm, given its known limitations in representation capacity? How would the proposed approach perform with more advanced MARL algorithms like MAPPO, IPPO, or QMIX?\\n2. Given that the experiments were conducted only on MPE environments (Spread-v3, Tag-v3, Reference-v3), how would the method perform on more complex MARL benchmarks? What challenges do you anticipate, and how sensitive might performance be to the choice of hyperparameters $\\\\alpha$ and $\\\\beta$?\\n3. What policy was used to generate responses for collecting preference feedback?\\n4. How was the preference feedback collected? Was it synthetic, based on true environment rewards, or did it come from real human preferences? These details are crucial for reproducibility, a deeper understanding of the approach, and identifying potential biases in the preference data.\\n5. The inherent dependence between the policy used to train the reward model and the policy being learned is not addressed in the paper. For instance, in the single-agent setting (see [4]), this dependence can be significant. How does the proposed approach handle this issue?\\n6. How does the quality of the learned reward function vary with different levels of expertise and sparsity in preference feedback?\\n\\n[1] Yu, Chao, et al. \\\"The surprising effectiveness of ppo in cooperative multi-agent games.\\\" Advances in Neural Information Processing Systems 35 (2022): 24611-24624.\\n\\n[2] De Witt, Christian Schroeder, et al. \\\"Is independent learning all you need in the starcraft multi-agent challenge?.\\\" arXiv preprint arXiv:2011.09533 (2020).\\n\\n[3] Rashid, Tabish, et al. \\\"Monotonic value function factorisation for deep multi-agent reinforcement learning.\\\" Journal of Machine Learning Research 21.178 (2020): 1-51.\\n\\n[4] Chakraborty, Souradip, et al. 
\\\"PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback.\\\" The Twelfth International Conference on Learning Representations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"| Algorithm | Dataset | Spread-v3 | Reference-v3 | Overcooked |\\n|--------------------------|------------------|------------------|------------------|-------------------|\\n| **VDN with Pessimism Penalty** | Diversified | -21.16 \\u00b1 0.54 | -18.89 \\u00b1 0.60 | **238.89 \\u00b1 3.50** |\\n| | Mix-Unilateral | -21.03 \\u00b1 0.44 | -18.80 \\u00b1 0.63 | 221.80 \\u00b1 26.66 |\\n| | Mix-Expert | -20.98 \\u00b1 0.54 | -18.80 \\u00b1 0.44 | 35.26 \\u00b1 55.19 |\\n| | Pure-Expert | -21.01 \\u00b1 0.57 | -28.97 \\u00b1 2.89 | 3.36 \\u00b1 7.19 |\\n| **MAIQL** | Diversified | -25.33 \\u00b1 1.40 | -22.15 \\u00b1 0.55 | **16.59 \\u00b1 11.22** |\\n| | Mix-Unilateral | -23.25 \\u00b1 1.06 | -23.22 \\u00b1 1.37 | 0.00 \\u00b1 0.00 |\\n| | Mix-Expert | -23.26 \\u00b1 0.90 | -24.21 \\u00b1 1.60 | 0.00 \\u00b1 0.00 |\\n| | Pure-Expert | -26.01 \\u00b1 1.53 | -29.47 \\u00b1 1.65 | 0.00 \\u00b1 0.00 |\\n| **MABCQ** | Diversified | -20.02 \\u00b1 0.64 | -17.64 \\u00b1 0.43 | **239.34 \\u00b1 1.67** |\\n| | Mix-Unilateral | -19.47 \\u00b1 0.33 | -17.64 \\u00b1 1.11 | 215.01 \\u00b1 65.43 |\\n| | Mix-Expert | -19.42 \\u00b1 0.17 | -17.88 \\u00b1 0.78 | 50.32 \\u00b1 82.82 |\\n| | Pure-Expert | -20.56 \\u00b1 0.38 | -25.90 \\u00b1 1.11 | 1.14 \\u00b1 3.46 |\"}", "{\"comment\": \"Thank you for your thorough and detailed response. 
Authors have addressed most of my concerns, and I am happy to increase my scores to reflect the improvements made.\"}", "{\"title\": \"General Response\", \"comment\": \"### **Revisions in the Updated Paper**\\nRevisions are marked in blue in the updated paper:\\n- **\\\"MARLHF\\\"** is replaced with **\\\"PbMARL\\\" (Preference-based MARL)**.\\n- The **Related Work** section is enriched.\\n- Missing details about **dataset distribution-based pessimism** are added.\\n- All the experiments in the original paper are **rerun** with 10 seeds.\\n- Experiments in the **Overcooked** environment and corresponding descriptions are added.\\n- Experiments with **MABCQ** and **MACQL** and their corresponding descriptions are added.\\n- Comments on **empirical results** are updated based on the new experiments.\\n- A paragraph about scalability is added to the **Experiments** section.\\n- **Typos are fixed**.\\n\\n---\\n\\n### **Acknowledgment of Reviewers' Feedback**\\n\\n**We thank all the reviewers for their insightful and constructive feedback!** \\nWe are encouraged by their positive evaluation of our work and their recognition of our contributions. We appreciate the reviewers' acknowledgment of our strong theoretical framework, including novel contributions to modeling MARLHF and establishing a solid theoretical foundation (1Zjc, Zpkc, oCgR, TKUr). \\n\\nWe are particularly pleased that the reviewers found our empirical validation comprehensive and well-designed, with experiments that align with and verify our theoretical claims (Zpkc, oCgR, TKUr). The effective empirical techniques we proposed, such as reward regularization and dataset distribution-based pessimism, were also noted for their impact on stabilizing learning and improving performance (1Zjc, oCgR). 
\\n\\nWe have carefully considered all the feedback and suggestions provided, addressing specific reviewer comments below, and will incorporate their valuable suggestions into our paper.\\n\\n---\\n\\n### **Response to a Common Concern: Preference Feedback**\\n\\nAs multiple reviewers (1Zjc, Zpkc) have asked about details regarding preference feedback, particularly the lack of human feedback, we address this inquiry here.\\n\\n#### **Title Change**\\nFirst of all, to clarify our focus, we have changed our title to **\\\"Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques\\\"**.\\n\\n#### **Using a Gold Reward Model**\\nUsing a gold reward model when constructing the dataset is a standard practice in the RLHF literature. An observation in [1] shows that first learning a proxy reward and then passing it into the BT model for pseudo-labeling to construct a new dataset outperforms directly learning from offline data. Section 6.1 of [2] describes an environment where offline data is directly constructed using a reward function. Similarly, [3] conducts experiments on datasets constructed from a reward model. \\n\\nAs a first step to advancing our understanding of MARLHF, our work primarily focuses on **bridging theories with experiments**, given limited computational resources. We agree that it is a valuable direction to construct datasets from more complex tasks, such as language modeling using preferences from human or AI feedback, and we leave this for future work.\\n\\n#### **Modeling Non-Transitive Preferences**\\nOne of our key techniques, **reward regularization**, is specifically designed for a reward-based preference setting. Without this technique, it is challenging to handle trajectory data in Markov games due to long horizons, making accurate credit assignment infeasible. Even in RLHF for Markov decision processes, it is difficult to address non-transitive preferences. 
For example, [4] models non-transitive preferences using a min-max game. Extending this to Markov games is non-trivial and requires careful design.\\n\\n---\\n\\n### **New Experiments**\\n\\nAs all reviewers mentioned the limited number of environments tested, and multiple reviewers (1Zjc, oCgR) requested comparisons with more algorithms, we have significantly updated the experiment section with new environments and algorithms.\\n\\nWe reran all experiments with **10 different seeds**, added a new environment (**Overcooked**), and introduced two additional algorithms (MAIQL and MABCQ). The main results are summarized in the table below. MAIQL and MABCQ are the CTDE versions of IQL [5] and BCQ [6], respectively. The results strongly support our claims regarding the importance of **data diversity** and **unilateral data**.\\n\\nFor more detailed discussions and results, please refer to **Table 2** and **Table 3** in the updated paper.\\n> [1] Xiong et al., Iterative preference learning from human feedback: Bridging theory and practice for RLHF under KL-constraint. \\n> [2] Song et al., The Importance of Online Data: Understanding Preference Fine-tuning via Coverage. \\n> [3] Tajwar et al., Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data. \\n> [4] Swamy et al., A Minimaximalist Approach to Reinforcement Learning from Human Feedback.\\n> [5] Kostrikov et al., Offline reinforcement learning with implicit q-learning. \\n> [6] Fujimoto et al., Off-policy deep reinforcement learning without exploration.\"}
4v4nmYWzBa
REVISITING MULTI-PERMUTATION EQUIVARIANCE THROUGH THE LENS OF IRREDUCIBLE REPRESENTATIONS
[ "Yonatan Sverdlov", "Ido Springer", "Nadav Dym" ]
This paper explores the characterization of equivariant linear layers for representations of permutations and related groups. Unlike traditional approaches, which address these problems using parameter-sharing, we consider an alternative methodology based on irreducible representations and Schur’s lemma. Using this methodology, we obtain an alternative derivation for existing models like DeepSets, 2-IGN graph equivariant networks, and Deep Weight Space (DWS) networks. The derivation for DWS networks is significantly simpler than that of previous results. Next, we extend our approach to unaligned symmetric sets, where equivariance to the wreath product of groups is required. Previous works have addressed this problem in a rather restrictive setting, in which almost all wreath equivariant layers are Siamese. In contrast, we give a full characterization of layers in this case and show that there is a vast number of additional non-Siamese layers in some settings. We also show empirically that these additional non-Siamese layers can improve performance in tasks like graph anomaly detection, weight space alignment, and learning Wasserstein distances.
[ "deep weight spaces", "permutation equivariance", "irreducible representations." ]
Accept (Poster)
https://openreview.net/pdf?id=4v4nmYWzBa
https://openreview.net/forum?id=4v4nmYWzBa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zff5Si2Wmj", "vH4donDwQT", "ojg9bbbbkv", "hSEtBl2IeI", "c2gKejjRfo", "YzrWhhIl6m", "WeAYyP8aEP", "Oc1CyGKqGm", "N2ms9vYJvU", "Jn0GrWe9wC", "FuKFipf0yM", "BjVINM1jLn", "BEA2LHO2Pu", "7N0yREca5S", "2Vu3GBfD5L" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732448480557, 1730551514143, 1730673843391, 1732563416633, 1732187413713, 1732188416379, 1729176486964, 1734543721830, 1730731377792, 1732344412005, 1732187110026, 1737523513379, 1732188986887, 1732714929673, 1732187861378 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_mzEE" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_mzEE" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_vHFp" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_vHFp" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_QTpm" ], [ "ICLR.cc/2025/Conference/Submission2593/Area_Chair_D3Vs" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_iEGE" ], [ "ICLR.cc/2025/Conference/Submission2593/Reviewer_QTpm" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ], [ "ICLR.cc/2025/Conference/Submission2593/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors for their responses and maintain my positive assessment and score.\"}", "{\"summary\": \"The paper introduces a novel methodology for characterizing equivariant linear layers for permutation representations, utilizing classical results from representation theory. 
Specifically, it provides an alternative characterization of equivariant linear layers for DeepSets, $2$-IGNs, and DWSNets, as well as the first comprehensive characterization of equivariant linear layers for unaligned symmetric elements. Importantly, the authors identify novel non-Siamese layers and empirically assess their impact.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Clear presentation and notation, supported by rigorous proofs.\", \"The methodology is both valuable and simple, with potential to generalize beyond the examples presented.\", \"A novel and complete characterization of representations for unaligned symmetric elements.\"], \"weaknesses\": [\"Lacks discussion on extending the approach to groups and representations beyond the few presented.\", \"In particular, an appropriate discussion on characterizing the more expressive layers of $k$-IGNs for $k>2$ is missing.\"], \"questions\": \"1. I find the methodology presented in L135-155 valuable to the research community due to its generalizability beyond the provided examples, most of which are already characterized. For this reason, would it be possible to add a *brief* discussion on the generalization of this methodology to strengthen the impact of this contribution and broaden its relevance to a wider community? See the following for more specific questions.\\n2. Computing a basis compatible with the irreducible representation decomposition can be challenging. Does this difficulty limit the methodology\\u2019s generalization? Are there similar technical challenges for characterizing $k$-IGN layers for $k > 2$?\\n3. Can this methodology be applied to other groups beyond $S_n$ and wreath products? If so, could you briefly provide a few examples?\\n4. Representations of the symmetric group are relevant in machine learning, and its irreducible representations are absolutely irreducible. 
In contrast, other relevant groups, such as finite cyclic groups, have real irreducible representations that are not absolutely irreducible. Could the framework presented here be extended to these cases? What potential challenges do you envision in extending to non-absolutely irreducible representations?\\n5. Could you elaborate on the future directions for $k$-IGNs presented in the conclusions (L537-539)?\\n\\n**Minor Issues (No Impact on Recommendation):**\\n- L073: I recommend specifying \\\"$2$-IGNs\\\" for transparency.\\n- L183: Is the presentation of $P_\\\\tau$ unnecessary?\\n- L340: The wreath product of groups is introduced but not defined in detail; as this operation is uncommon in machine learning literature, additional explanation would benefit Section 5. Also, consider demonstrating that equation 7 forms a linear representation of this group, perhaps in the appendix.\\n- L420: Typo, \\u201cis prove\\u201d.\\n- L379 and L1030: I cannot understand why $\\\\mathcal{V}^k$ is an irreducible representation of $\\\\mathcal{G}^k$; is it instead irreducible for $\\\\mathcal{G} \\\\wr S_n$?\\n- L1040: The closing curly bracket is missing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies equivariant linear layers for representations of permutations and related groups from a novel irreducible representations perspective. The authors provide an alternative derivation for models including DeepSets, 2-IGN, and Deep Weight Space (DWS) networks. The theory is then extended to unaligned symmetric sets, showing that there is a vast number of additional non-Siamese layers in certain settings. 
Experiments show that additional non-Siamese layers improve the performance in tasks like graph anomaly detection, weight space alignment, and learning Wasserstein distances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper offers the irreducible representations perspective for deriving classical models like DeepSets, 2-IGN and DWS networks. Some derivations are simpler than the original ones. The writing is clear and easy to follow. I checked the details and they are sound.\", \"weaknesses\": [\"While the new derivations align with original methods, the resulting models are not new. The concept of ``irreducible representation'' is also well studied, so the contribution of the paper lies mainly in bridging two topics, which is interesting but natural. In particular for equivariant graph layers, the authors only provide derivations for 2-IGN. As admitted in the limitation section, the paper does not involve higher-order $k$-IGN. The authors should explain whether their method is broadly applicable to these networks based on tensor representations, or needs case-by-case derivations.\", \"Although this is a theoretical paper, the experiments could be improved. More baselines and more real-world tasks are strongly encouraged.\"], \"questions\": [\"Can the method be generalized to higher-order $k$-IGN in a principled manner? Can you briefly describe the claim that ``using irreducibles could lead to new equivariant models with intermediate irreducible features of lower dimensions''?\", \"Can you conduct more experiments on real-world and large-scale datasets, and include more baselines? In addition, can you intuitively explain why non-Siamese layers help in these tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their response. 
Although there are still some unaddressed concerns, I agree that the paper has many contributions and thus will keep my rating.\"}", "{\"title\": \"Answer to review\", \"comment\": \"We thank the reviewer for their thoughtful comments and time.\\n\\n**W1:**\\n\\n \\u201cThe presentation and flow of the paper could be improved. The claims and results are challenging to follow, which may limit the broader audience\\u2019s ability to appreciate the work.\\u201d\\n\\n**A1:**\\nNote that all other reviewers were very positive about the paper's clarity. If there are any specific points you think we should clarify, we will be happy to do so. \\n\\n**W2:**\\nThe paper\\u2019s contributions lack clarity. The paper offers an irreducible-based derivation for existing results and characterizes equivariant functions on unaligned symmetric elements, but the impact and relevance of these contributions remain unclear. It is not evident how these results benefit the design of novel architectures or enhance our understanding of current ones. This limits the significance of the work and may fall short of ICLR\\u2019s standards.\\n\\n**A2:**\\nOur theoretical contribution is divided into two parts: revisiting existing results and new results for sets of unaligned symmetric elements. In particular, the characterization of DWS layers is very challenging to understand using other methods and requires much bookkeeping and scenario splitting. Our DWS derivation is very simple to understand. We believe this will be helpful for researchers working on improving architectures in this emerging topic.\\n\\nThe second result, for sets of unaligned symmetric elements, is completely new and arises in many real-world scenarios, including the graph anomaly detection, learning Wasserstein distances, and weight space alignment problems discussed in the paper. Other examples not discussed in the paper include learning ICP-like metrics [1] or graph matching [2]. 
The main insight of this paper for these problems is that, besides the commonly used Siamese structure, there can be a considerable number of non-Siamese layers that respect the problem\\u2019s equivariant structure; we characterize all of these layers.\\n\\n**W3:**\\n\\u201cThe empirical evaluation is limited, and the results are not compelling. Using synthetic data for anomaly detection does not sufficiently demonstrate the method\\u2019s practical applicability, as the task is relatively unchallenging and does not show the strengths of the proposed approach.\\u201d\\n\\n**A3:**\\nWhile the anomaly detection experiment is indeed synthetic and highlights our theoretical advantage, we do empirically evaluate our approach on two additional real-world datasets used in prior ICML and NeurIPS papers. In the Wasserstein distance approximation experiment, we compare against a recent Siamese method from NeurIPS 2023, on a variety of datasets including a gene expression dataset (RNAseq) and an object point-cloud dataset, ModelNet40. In the weight space alignment experiment, we compare against a Siamese method from ICML 2024, testing our performance on implicit neural representations (INRs) of the MNIST and CIFAR10 image datasets.\\n\\n\\n[1] Deep Closest Point: Learning Representations for Point Cloud Registration, Wang and Solomon, ICCV 2019\\n\\n[2] Neural Graph Matching Network: Learning Lawler\\u2019s Quadratic Assignment Problem With Extension to Hypergraph and Multiple-Graph Matching, Wang, Yan and Yang, TPAMI 2022\"}", "{\"title\": \"Answer to Reviewer\", \"comment\": \"We thank the reviewer for their thoughtful comments and time.\\n\\n**Weaknesses**:\\n\\u201cIn particular, an appropriate discussion on characterizing the more expressive layers of $k$-IGNs for $k>2$ is missing\\u201d.\\n\\n**Answer**\\nRegarding k-IGN for k>2, we currently do not know how to compute the irreducible decomposition in this scenario. 
\\n\\n**Questions**\\n\\n**Q1:** For this reason, would it be possible to add a brief discussion on the generalization of this methodology to strengthen the impact of this contribution and broaden its relevance to a wider community? See the following for more specific questions.\\n\\n**A1:**\\nThanks for this great suggestion. We added the following discussion based on your questions to page 3: *\\u201cWe note that the cornerstones of this methodology: decomposition into irreducibles and Schur's Lemma, are applicable for all finite dimensional representations of finite groups (and also for compact infinite groups like $SO(d)$). The main challenge in this approach is characterizing and computing the decomposition into irreducibles. This needs to be done on a case to case basis. Much of the remainder of the paper will be devoted to computing these decompositions for important equivariant learning scenarios.\\u201d*\\n\\n**Q2:** Does this difficulty limit the methodology\\u2019s generalization? Are there similar technical challenges for characterizing k-IGN layers for k>2?\\n\\n**A2:**\\nYes, in this approach the main challenge is characterizing the irreducible representations. Indeed, this is what stops us from applying it to k-IGN for k>2. We know these irreducibles exist, but we are still missing an algorithm to compute the decomposition. Note that, although the characterization for k-IGN is difficult, the characterization for DWS layers is very simple in our methodology, in contrast to other methods, which are very tedious and difficult to understand.\\n\\n**Q3:** Can this methodology be applied to other groups beyond $S_n$ and wreath products? If so, could you briefly provide a few examples?\\n\\n**A3:**\\nGenerally, the irreducible-based method can be applied to all finite groups whose irreducible representations are absolutely irreducible (see the discussion in the next question). It can also be applied to infinite compact groups like SO(3), which is actually a popular approach. 
Examples of papers following this approach are given in the related work section.\\n\\n**Q4:** Could the framework presented here be extended to these cases? What potential challenges do you envision in extending to non-absolutely irreducible representations?\\n\\n**A4:**\\nFor general real representations, we can still write the representation as a sum of irreducible representations, and there will still be no linear equivariant maps between non-isomorphic irreducibles. The difference is that the space of linear equivariant maps from an irreducible V to itself will be either:\\n- one-dimensional, {$\\\\lambda I | \\\\lambda \\\\in R$}, as in the permutation case;\\n- two-dimensional, isomorphic to the complex numbers; or\\n- four-dimensional, isomorphic to the quaternions.\\n\\nThis is explained nicely here: https://math.mit.edu/~poonen/715/real_representations.pdf\\nOne would then a priori need to check case by case what the space of isomorphisms is for each irreducible. An interesting alternative would be to use an automatic numerical method to find all equivariant layers between the irreducibles, as described in [Finzi et al. 2021]. Since the dimensions of the equivariant layers are at most four, the computational price of such an approach should be very reasonable.\\n\\nWe have added a discussion of this point to page 3 as well: \\u201cWe note that when V is not absolutely irreducible, the space of isomorphisms from V to W is either 2 or 4-dimensional (Poonen, 2016). In this setting, using an automatic computational method to find all equivariant layers may be beneficial (Finzi et al. 2021).\\u201d\\n\\n**Q5:** Could you elaborate on the future directions for $k$-IGNs presented in the conclusions (L537-539)?\\n\\n**A5:**\\nYes. The standard k-IGN framework considers equivariant maps between tensor representations, which are of dimension $n,n^2,n^3,...$ The irreducible representation framework shows that, e.g., 
the $n^2$ dimensional matrix space from 2-IGN can be decomposed into 7 irreducible subspaces: two are 1 dimensional, three are $n-1$ dimensional, and the remaining two are approximately $n^2/2$ dimensional. One could then consider equivariant maps based on this decomposition, with a different number of hidden features coming from each one of the irreducible representations. For example, from a computational perspective, it may make sense to take more features from the $n-1$ dimensional representations and fewer features from the $n^2/2$ dimensional representations. Similar ideas could be applied to k-IGN (but this would first require characterizing the irreducibles, which is also future work). \\n\\n**Minor Issues (No Impact on Recommendation)...**\\n\\nThanks, we incorporated all your suggestions in the revised manuscript.\"}", "{\"summary\": \"The paper considers the problem of constructing linear equivariant layers for groups acting (linearly) on input and output spaces. Specifically, it proposes to exploit the decomposition into irreducible group representations and then appeal to Schur\\u2019s Lemma, which reduces the problem to choosing coefficients for pairs of isomorphic representations. Several specific instances are analyzed, such as permutation groups in the context of graph neural networks, groups acting on weights of deep networks, and wreath products acting on products of representations.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"-The paper is exceptionally well written. The language is clear and concise, the sections are structured, and the mathematical formalism/notation is elegant.\\n\\n-The problem considered is a fundamental one in the machine learning literature. Constructing (linear) equivariant maps lies at the heart of geometric deep learning, which has been successful in several applications. 
\\n\\n-The proposed solution is general, as it applies, in principle, to any input/output group representation. Several existing frameworks are phrased under the same paradigm, contributing with structure and clarity to the geometric deep learning literature.\", \"weaknesses\": \"I believe that the proposed approach via Schur\\u2019s Lemma comes with disadvantages. To begin with, using Schur\\u2019s Lemma to construct equivariant linear maps is not novel in the geometric deep learning community. It is a rather well-known technique \\u2013 see, for example, Behboodi et al., Section 3.2. This is a major concern, since Schur\\u2019s Lemma represents a core point of this work; the other contributions amount to rephrasings of known frameworks from the literature under the lenses of Schur\\u2019s Lemma. Moreover, Schur\\u2019s Lemma has some restrictions. First, it requires the decomposition into irreducible representations to be known a priori, which is not always the case. Such decomposition is challenging to compute algorithmically for general groups and representations. Second, Schur\\u2019s Lemma applies naively only to complex representations (i.e., over $\\\\mathbb{C}$). As the authors mention, this is not an issue for permutation groups (appendix B), but it can be for other groups. It is still possible to apply Schur\\u2019s Lemma to arbitrary real representations of arbitrary groups, but this involves subtleties \\u2013 see Behboodi et al., Section 8.\\n\\nI also find the experimental section rather weak. The experiments reported only consider ideal equivariant tasks, i.e., scenarios where the ground-truth function is equivariant. The experimental results show that adding equivariant layers to the network improves (generalization) performance, as compared to non-equivariant architectures. This is not surprising, since in these cases the inductive bias given by equivariance aligns perfectly with the structure of the task. 
In typical real-world scenarios (e.g., image classification), the (highly-noisy) ground-truth function is instead not exactly equivariant, or it is not equivariant on all the input data. In my opinion, it would be more informative and less trivial to test the models on these types of real-world tasks. The equivariance bias is often still beneficial in terms of generalization \\u2013 as works in geometric deep learning have extensively shown \\u2013 but empirical investigations are required to assess this carefully.\", \"minor_typos\": \"-The paragraph title on line 86 is not capitalized, while the one on line 100 is. \\n\\n-The tables in section 6 exceed the margins of the paper.\\n\\n\\nBehboodi et al., A PAC-Bayesian Generalization Bound for Equivariant Networks, NeurIPS 2022.\", \"questions\": \"I would like the authors to comment on the above points regarding novelty and significance of experiments.\\n\\nMy current opinion is that the work is exceptionally well-written, and bears several contributions to the geometric deep learning literature. However, I am concerned with the novelty and significance, as outlined above. Still, I am leaning towards accepting the paper, but would like to hear from the authors about my points of criticism.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper derives the existing equivariant models of Deep Sets, 2-IGNs, and Deep Weight Space networks, in terms of irreducible representations and Schur's lemma. The concept of the paper is interesting, and the theoretical contribution delivers a mathematical approach to understanding existing machine learning architectures that a priori could seem ad hoc. 
However, the reviewers raised valid concerns regarding the applicability of the approach in general (Schur's lemma holds over $\\mathbb C$, and the approach is only described for permutation groups for which the irreps are simple to compute). They also raised concerns about the experimental evaluation. However, using standard mathematical tools to explain things that could otherwise seem an arbitrary construction is useful, so the positives outweigh the negatives.\", \"additional_comments_on_reviewer_discussion\": \"All but one reviewer found the paper marginally above the threshold after the discussion period. The most negative reviewer voted to reject (3) but did not engage in the discussion, so after a conversation with the senior area chair, we decided to discard this review.\"}", "{\"summary\": \"The paper introduces an alternative approach for characterizing equivariant linear layers in neural networks that process permutation and related group representations. The paper derives a simpler method for obtaining existing models such as DeepSets, 2-IGN, and Deep Weight Space networks, based on irreducible representations and Schur\\u2019s lemma. The proposed framework also considers unaligned symmetric sets, building upon equivariance to the wreath product of groups.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a fresh perspective on equivariant layer characterization by applying irreducible representations and Schur\\u2019s lemma to obtain simplified derivations of established models, such as DeepSets, 2-IGN, and Deep Weight Space (DWS) networks.\\n\\n2. The theoretical foundations are well-developed. The work provides a complete characterization of equivariant layers in the context of unaligned symmetric sets, which is an interesting theoretical contribution.\", \"weaknesses\": \"1. The presentation and flow of the paper could be improved.
The claims and results are challenging to follow, which may limit the broader audience\\u2019s ability to appreciate the work.\\n\\n2. The paper\\u2019s contributions lack clarity. The paper offers an irreducible-based derivation for existing results and characterizes equivariant functions on unaligned symmetric elements, but the impact and relevance of these contributions remain unclear. It is not evident how these results benefit the design of novel architectures or enhance our understanding of current ones. This limits the significance of the work and may fall short of ICLR\\u2019s standards.\\n\\n3. The empirical evaluation is limited, and the results are not compelling. Using synthetic data for anomaly detection does not sufficiently demonstrate the method\\u2019s practical applicability, as the task is relatively unchallenging and does not show the strengths of the proposed approach.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I am grateful to the authors for their reply.\\n\\nI acknowledge that the contribution on wreath-equivariant layers is worthy of interest, and I will therefore keep my (positive) score.\"}", "{\"title\": \"For all reviewers\", \"comment\": \"We\\u2019d like to thank the reviewers for taking the time to review the paper and for their constructive comments.
We were happy to see that for the most part the reviewers were positive about the paper, and felt that it was \\u201cexceptionally well-written, and bears several contributions to the geometric deep learning literature\\u201d.\\n\\nThere were several points that came up in several reviews, which we would like to clarify:\\n* Some of the reviewers claimed that \\u201cthe contributions amount to rephrasings of known frameworks from the literature under the lenses of Schur\\u2019s Lemma.\\u201d We\\u2019d like to emphasize that this is not the case. Section 5 provides a characterization of wreath-equivariant linear layers which is completely new. This is, in our view, an important result. Previous work [Wang et al., Neurips 2020] devoted solely to this problem focused only on the special case where the permutation action is transitive, and suggested a very small number of non-siamese layers. We give a complete characterization in a much more general setting, and show that in several cases there is a vast number of non-siamese layers. We also experimentally show the relevance of these results for learning Wasserstein distances and aligning deep weight spaces. \\nWe also feel that the contribution in Section 4, which is indeed dedicated to the derivation of known results from the irreducible perspective, will be valuable to the community. In particular, our derivation of Deep Weight Space layers is substantially simpler than previous methods, and we believe this will be helpful for researchers working on improving architectures in this emerging topic.
\\n\\n* There were some remarks on the lack of \\u201creal world experiments\\u201d. In this context we\\u2019d like to emphasize that we consider the results in Table 2 and Table 3 \\u201creal world tasks\\u201d. They address the problem of computing Wasserstein distances, and aligning neural weight spaces, which are important and relevant topics, and the datasets and competing siamese methods come from recent successful submissions [Chen and Wang, Neurips 2023] and [Navon et al. ICML 2024]. We believe the magnitude of the experimental section is adequate for a theory-based paper like ours. \\n\\nWe have uploaded a revised version of our manuscript, addressing your comments. Changes from the submitted version are marked in blue.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Answer to Reviewer\", \"comment\": \"We thank the reviewer for their thoughtful comments and time.\\n\\n**W1:** *I believe that the proposed approach via Schur\\u2019s Lemma comes with disadvantages. To begin with, using Schur\\u2019s Lemma to construct equivariant linear maps is not novel in the geometric deep learning community. It is a rather well-known technique \\u2013 see, for example, Behboodi et al., Section 3.2.*\\n\\n**A1:**\\nWe do not dispute this claim. Indeed, in the related work section, we have devoted a paragraph to reviewing works that have employed this approach. The point of this paper is to apply the irreducible representation methodology to important equivariant learning scenarios where it has not yet been applied and show that this approach yields some benefits over the more commonly used parameter-sharing analysis.\\n\\n**W2:**\\n*\\u201cThis is a major concern, since Schur\\u2019s Lemma represents a core point of this work; the other contributions amount to rephrasings of known frameworks from the literature under the lenses of Schur\\u2019s Lemma.\\u201d*\\n\\n**A2:** We disagree with this claim.
Section 5 offers a novel characterization of wreath-equivariant linear layers, a key result. Unlike [Wang et al., Neurips 2020], which focused only on transitive permutation actions with few non-siamese layers, we provide a complete, general characterization, showing many non-siamese layers exist. Experiments confirm their relevance for learning Wasserstein distances and aligning deep weight spaces.\\nThe first part of the paper also adds value by simplifying the derivation of DWS layers, previously tedious. This clarity benefits researchers improving architectures in this emerging field.\\n\\n**W3:**\\n*Moreover, Schur\\u2019s Lemma has some restrictions. First, it requires the decomposition into irreducible representations to be known a priori, which is not always the case. Such decomposition is challenging to compute algorithmically for general groups and representations. Second, Schur\\u2019s Lemma applies naively only to complex representations . As the authors mention, this is not an issue for permutation groups (appendix B), but it can be for other groups. It is still possible to apply Schur\\u2019s Lemma to arbitrary real representations of arbitrary groups, but this involves subtleties \\u2013 see Behboodi et al., Section 8.*\\n\\n**A3:**\\nWe agree with all the facts stated in this paragraph. Using irreducible representations to characterize linear layers has advantages and disadvantages. You summarized them well. The point of this manuscript is to show the benefits of this approach for some permutation actions where this approach is not typically used. In particular, (1) we get much simpler derivations of the DWS layers, and (2) we derive all wreath equivariant layers, which was not done previously. \\n\\n**Q4:**\\n*\\u201cI also find the experimental section rather weak. The experiments reported only consider ideal equivariant tasks, i.e., scenarios where the ground-truth function is equivariant. 
The experimental results show that adding equivariant layers to the network improves (generalization) performance as compared to non-equivariant architectures. This is not surprising since in these cases, the inductive bias given by equivariance aligns perfectly with the structure of the task. In typical real-world scenarios (e.g., image classification), the (highly noisy) ground-truth function is instead not exactly equivariant, or it is not equivariant on all the input data. In my opinion, it would be more informative and less trivial to test the models on these types of real-world tasks. The equivariance bias is often still beneficial in terms of generalization \\u2013 as works in geometric deep learning have extensively shown \\u2013 but empirical investigations are required to assess this carefully.\\u201d*\\n\\n**A4:**\\nIt\\u2019s possible that we didn\\u2019t explain ourselves well, but the baselines are some Siamese models that are also equivariant to wreath products. The difference is that our method includes all equivariant layers and, hence, is more expressive and leads to better results in practice.\", \"regarding_equivariance\": \"in our view, equivariance is often exact and not approximate, even with image classification: e.g., given a noisy image of a motorbike, a rotated version of the image will still be an image of a motorbike.\\n\\nRegarding \\u201creal world tasks\\u201d: we consider the results in Table 2 and Table 3 \\u201creal world tasks\\u201d. They address the problem of computing Wasserstein distances, and aligning neural weight spaces, which are important and relevant topics. The datasets and competing siamese methods come from recent Neurips/ICML submissions [Chen and Wang, Neurips 2023] and [Navon et al. ICML 2024].
They involve computing Wasserstein distances on 3D models from ModelNet40 and for RNA benchmarks and aligning neural networks trained on MNIST and CIFAR.\\n\\n **Minor Typos:**\\n*The paragraph title on line 86 is not capitalized, while the one on line 100 is.*\\n*The tables in section 6 exceed the margins of the paper.*\\n\\n**A:** Fixed.\"}", "{\"comment\": \"Dear reviewer, have our answers addressed your concerns? We're looking forward to your feedback.\"}", "{\"title\": \"Answer to Reviewer\", \"comment\": \"We thank the reviewer for their time and thoughtful comments.\\n\\n**Weaknesses**\\n\\n**W1:**\\nWhile the new derivations align with original methods, the resulting models are not new. The concept of ``irreducible representation'' is also well studied, so the contribution of the paper lies mainly in bridging two topics, which is interesting but natural.\\n\\n**A1:**\\nWe do not agree with this description. Firstly, our results and model for sets of unaligned symmetric elements (Section 5) are completely new. Secondly, even the theoretical results that provide a new derivation of existing models require a non-trivial analysis of the problem and are not all immediate. The generally known fact is that irreducible representations exist and that if the decomposition to irreducibles is known, it can be used to characterize all linear equivariant layers via Schur\\u2019s lemma. However, the decomposition into irreducibles for the examples we discussed was unknown.\\n\\n**W2:** In particular for equivariant graph layers, the authors only provide derivations for 2-IGN. As admitted in the limitation section, the paper does not involve higher-order-IGN. The author should explain whether their method is broadly applicable for these networks based on tensor representations, or needs case-by-case derivations\\n\\n**A2:**\\nCurrently, our analysis does not support higher-order IGNs.
We know theoretically that the tensor representations can be decomposed into irreducibles, but we currently do not know how to compute this decomposition. We have not invested much energy into this question because the characterization of k-IGN layers using parameter sharing is very elegant. In contrast, the advantage of our approach is more apparent for Deep Weight Spaces or Wreath equivariant structures. \\n\\n**W3:** Although this is a theoretical paper, the experiments could be improved. More baselines and more real-world tasks are strongly encouraged.\\n\\n**A3:** We include three experiments in the paper: one admittedly synthetic experiment on graph anomaly detection, and two experiments improving upon recent successful Siamese-based methods for (a) computing Wasserstein distance (Chen and Wang, Neurips 2023) and (b) aligning weight spaces (Navon et al, ICML 2024). We believe the experimental part is on par with, or more extensive than, what is common in similar ICLR theoretical papers.\\n\\n**Questions:**\\n\\n**Q1:** Can the method be generalized to higher-order-IGN in a principled manner?\\n\\n**A1:**\\n Currently, our work doesn\\u2019t generalize to higher-order IGNs (discussed in more detail in the answer to **W2** above). \\n\\n**Q2:**\\n*Can you briefly describe the claim that ``using irreducibles could lead to new equivariant models with intermediate irreducible features of lower dimensions''?\\u201d*\\n\\n**A2:** The standard k-IGN framework considers equivariant maps between tensor representations, which are of dimension $n,n^2,n^3,\\dots$. The irreducible representation framework shows that, e.g., the $n^2$ dimensional matrix space from 2-IGN can be decomposed into 7 irreducible subspaces: two are $1$-dimensional, three are $(n-1)$-dimensional, and the remaining two are approximately $n^2/2$-dimensional. One could then consider equivariant maps based on this decomposition, with a different number of hidden features coming from each one of the irreducible representations.
For example, from a computational perspective, it may make sense to take more features from the $n-1$ dimensional representations and fewer features from the $n^2/2$ dimensional representations. Similar ideas could be used for general k-IGN (once the decomposition is computed).\\n\\n**Q3:** Can you conduct more experiments on real-world and large-scale datasets, and include more baselines? In addition, can you intuitively explain why non-Siamese layers help in these tasks?\\n\\n**A3:**\\nWe were not able to add more tasks in the time allotted for the rebuttal. We think that the current scope of experiments is on par with, or more extensive than, what is common in similar ICLR theoretical papers.\"}
4v4RcAODj9
DUALFormer: Dual Graph Transformer
[ "Jiaming Zhuo", "Yuwei Liu", "Yintong Lu", "Ziyi Ma", "Kun Fu", "Chuan Wang", "Yuanfang Guo", "Zhen Wang", "Xiaochun Cao", "Liang Yang" ]
Graph Transformers (GTs), adept at capturing the locality and globality of graphs, have shown promising potential in node classification tasks. Most state-of-the-art GTs succeed through integrating local Graph Neural Networks (GNNs) with their global Self-Attention (SA) modules to enhance structural awareness. Nonetheless, this architecture faces limitations arising from scalability challenges and the trade-off between capturing local and global information. On the one hand, the quadratic complexity associated with the SA modules poses a significant challenge for many GTs, particularly when scaling them to large-scale graphs. Numerous GTs necessitated a compromise, relinquishing certain aspects of their expressivity to garner computational efficiency. On the other hand, GTs face challenges in maintaining detailed local structural information while capturing long-range dependencies. As a result, they typically require significant computational costs to balance the local and global expressivity. To address these limitations, this paper introduces a novel GT architecture, dubbed DUALFormer, featuring a dual-dimensional design of its GNN and SA modules. Leveraging approximation theory from Linearized Transformers and treating the query as the surrogate representation of node features, DUALFormer \emph{efficiently} performs the computationally intensive global SA module on feature dimensions. Furthermore, by such a separation of local and global modules into dual dimensions, DUALFormer achieves a natural balance between local and global expressivity. In theory, DUALFormer can reduce intra-class variance, thereby enhancing the discriminability of node representations. Extensive experiments on eleven real-world datasets demonstrate its effectiveness and efficiency over existing state-of-the-art GTs.
[ "Graph Transformers", "Node Classification" ]
Accept (Poster)
https://openreview.net/pdf?id=4v4RcAODj9
https://openreview.net/forum?id=4v4RcAODj9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yAoYb2dig7", "y4jVzyVo9t", "tsRHsba084", "tZCu0UUUDY", "rTOIrA7lD9", "qIy2pUFNWI", "ovn9f7Z4GP", "hIOdOlSxG1", "enVuZ9wH4G", "biTY5KTTIs", "WggRFVMqiC", "VAx85S5jNm", "U6o6X3hJVv", "Q3WeOIK3nw", "NWMRffGJf6", "MjzBdBaQqw", "EKcOPCg0ZX", "DWNSGBRVyg", "D4CTj70qmX", "Cv07xJPvIa", "2DfkmIDvNf", "1KfFuEBxej" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review" ], "note_created": [ 1730471772445, 1731661126736, 1732373533087, 1732678490495, 1730007132725, 1731660942221, 1731660975564, 1732255495426, 1737523933489, 1731771633869, 1732363589987, 1731810418393, 1732075185535, 1731674240494, 1732086257648, 1731661040780, 1730559452675, 1731748227047, 1731661182772, 1729740700811, 1731660385386, 1734530241812 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_CPJ4" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_GjDD" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_Q319" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_CPJ4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_GjDD" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_Q319" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_YCph" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_GjDD" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_YCph" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Reviewer_GjDD" ], [ "ICLR.cc/2025/Conference/Submission8805/Authors" ], [ "ICLR.cc/2025/Conference/Submission8805/Area_Chair_C352" ] ], "structured_content_str": [ "{\"summary\": \"To address the scalability limitations of graph transformers (GTs) and the challenge of balancing local and global information, this paper introduces DualFormer, a novel GT architecture. DualFormer calculates global attention along the feature dimension, enabling the model to perform effectively and efficiently on large graphs while maintaining strong performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The writing is generally clear and accessible, making the paper readable and easy to follow.\", \"The proposed method is both understandable and implementable, yet effective. It performs well on several datasets.\", \"The paper includes diverse experimental analyses, such as node classification, node property prediction, ablation studies, and parameter sensitivity analyses. Furthermore, the authors offer theoretical guarantees to support the method.\"], \"weaknesses\": \"- The motivation for the study is not fully convincing. Further details are provided in the questions below.\\n- Since the paper emphasizes the method\\u2019s scalability, additional experiments on larger graphs would reinforce this claim. Suggested datasets include *Roman-Empire*, *Question[1]*, *Wiki*, and *ogbn-papers100M*. Moreover, the GNN baselines in Tables 2 and 3 are outdated, which may reduce the persuasiveness of the results. 
For instance, the statement, \\u201cMost GTs consistently show superior performance over GNNs across all datasets\\u201d (line 451), would be more convincing if compared with recent GNN baselines, such as *ChebNetII[2]* and *OptBasis[3]*, to present a more comprehensive evaluation.\\n- Minor Issues: There are a few typographical errors, such as \\\"abov\\\" (line 182). Consistent notation throughout the paper is also preferable. For instance, in line 168, there is a \\\"$\\\\times$\\\" symbol between a scalar and a matrix, but not in line 216. Additionally, line 191 includes a \\\"$\\\\cdot$\\\" between matrices, whereas line 167 does not.\\n\\n[1] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? In ICLR 2023.\\n\\n[2] Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited. In NeurIPS 2022.\\n\\n[3] Graph Neural Networks with Learnable and Optimal Polynomial Bases. In ICML 2023.\", \"questions\": [\"The first question concerns the reasonableness of applying softmax to the global correlations between features.\", \"In standard self-attention, $ \\\\mathbf{O} = \\\\exp(\\\\text{sim}(\\\\mathbf{Q}, \\\\mathbf{K}))\\\\mathbf{V} $ (Eq. 6).\", \"Through linearized attention, $ \\\\mathbf{O} = \\\\phi(\\\\mathbf{Q}) \\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V} $ (Eq. 11), where each element in $ \\\\phi(\\\\mathbf{Q}) \\\\phi(\\\\mathbf{K})^\\\\top $ is non-negative, representing attention weights (global dependencies between nodes).\", \"By the associative property of matrix multiplication, $ \\\\mathbf{O} = \\\\phi(\\\\mathbf{Q}) (\\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V}) $, so we can interpret $ (\\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V}) $ as a correlation matrix (with elements that can be positive or negative).\", \"However, in Eq.
13, $ \\\\mathbf{V} \\\\text{softmax}(\\\\mathbf{Q}^\\\\top \\\\mathbf{K}) $, i.e., $ \\\\mathbf{Q} \\\\text{softmax}(\\\\mathbf{K}^\\\\top \\\\mathbf{V}) $, differs from $ \\\\phi(\\\\mathbf{Q}) (\\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V}) $ because elements in $\\\\text{softmax}(\\\\mathbf{K}^\\\\top \\\\mathbf{V}) $ are all non-negative, unlike those in $ (\\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V})$. Could you clarify these differences and explain why it is reasonable to replace $ \\\\phi(\\\\mathbf{Q}) (\\\\phi(\\\\mathbf{K})^\\\\top \\\\mathbf{V}) $ with $ \\\\mathbf{Q} \\\\text{softmax}(\\\\mathbf{K}^\\\\top \\\\mathbf{V}) $?\", \"The second question pertains to the interpretation of the proposed global attention. The method appears to aggregate information along the feature dimension, unlike previous approaches that gather global information across all or most nodes in a graph. For a one-dimensional feature, $ \\\\mathbf{V} \\\\text{softmax}(\\\\mathbf{Q} \\\\mathbf{K}^T) $ in Eq. 13 reduces to $ \\\\mathbf{V} \\\\cdot \\\\alpha $, where $ \\\\alpha $ is a scalar and $ \\\\mathbf{V} \\\\in \\\\mathbb{R}^{n} $. How can this be understood as gathering information from a global perspective?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GjDD (Part 1)\", \"comment\": \">Q1. The key contributions of the proposed method are not clear.\\n\\nR1. The key contribution of the proposed DUALFormer is to introduce self-attention on the feature dimension. Although the design is simple, it possesses the following three excellent characteristics.\\n\\n1) **A scalable self-attention**. Due to the quadratic computational complexity of their self-attention to the node dimension, Vanilla GTs often encounter scalability issues. In contrast, self-attention in DUALFormer operates efficiently, with complexity linearly related to the size of the graph. 
It is designed to capture inter-feature correlations to approximate global inter-node dependencies. As a result, it is potentially scalable to large-scale graphs. \\n\\n2) **Improvement of discriminability**. Rigorous theoretical analysis demonstrates the rationality behind this design of self-attention on a novel dimension in improving the discriminability of node representations. \\n\\n3) **Comprehensive expressivity**. Due to the global self-attention operating on the feature dimension, it seamlessly integrates with the local GNN module without compromising their expressivity. Therefore, DUALFormer achieves an automatic trade-off between local and global expressivity. \\n \\nFurthermore, the proposed DUALFormer has achieved **state-of-the-art** performances on many tasks, including node classification and node property prediction. \\n \\n\\n ---\\n>Q2. As the authors claim in Eq. 13, the proposed method only captures the feature-to-feature correlations. In my opinion, it is not the global information on the graph since it is unable to capture the relations between nodes. Why do authors claim the proposed method can capture the global information on the graph?\\n\\nR2. The capability of capturing global information stems from the approximate equivalence between $\\\\operatorname{softmax}(\\\\mathbf{Q}\\\\mathbf{K}^T)\\\\mathbf{V}$ and $\\\\mathbf{Q}(\\\\mathbf{K}^T\\\\mathbf{V})$. The global information is captured by the attention between nodes, i.e., $\\\\operatorname{softmax}(\\\\mathbf{Q}\\\\mathbf{K}^T)$, in previous graph transformers. According to the associative law of matrix multiplication, it holds that $(\\\\mathbf{Q}\\\\mathbf{K}^T)\\\\mathbf{V} = \\\\mathbf{Q}(\\\\mathbf{K}^T\\\\mathbf{V})$ and $\\\\operatorname{softmax}(\\\\mathbf{Q}\\\\mathbf{K}^T)\\\\mathbf{V} \\\\approx \\\\mathbf{Q}(\\\\mathbf{K}^T\\\\mathbf{V})$.
Thus, this paper proposes to approximate the expensive node-node attention $(\\mathbf{Q}\\mathbf{K}^T)$ via the efficient feature-feature attention $(\\mathbf{K}^T\\mathbf{V})$ since $\\operatorname{softmax}(\\mathbf{Q}\\mathbf{K}^T)\\mathbf{V} \\approx \\mathbf{Q}(\\mathbf{K}^T\\mathbf{V})$. In this way, the proposed DUALFormer can capture global information. \\n\\n\\n---\\n>Q3. The authors claim that the computational complexity of the proposed method is $O(n)$, which is obviously wrong. Based on Eq. 14, the calculation involves the adjacency matrix. Hence, the computational complexity of this part is $O(E)$, and it cannot be ignored since $|E| > |N|$ (even $|E| \\gg |N|$ on some graphs). \\n\\nR3. Thanks for pointing out this error. In the previous version, only the complexities of the self-attention modules in **ALL** GTs were considered. We will add the time complexity of the GNN modules to the time complexity of the corresponding methods, including GraphTrans, SAT, GraphGPS, NodeFormer, NAGphormer, Exphormer, GOAT, SGFormer, Polynormer, GoBFormer, and the proposed DUALFormer. The adjusted time complexity is shown in the following table, where $e$ denotes the number of edges. \\n\\n| |GraphTrans| SAT| GraphGPS | NodeFormer | NAGphormer | Exphormer | GOAT | SGFormer | Polynormer | GoBFormer| DUALFormer\\n|:--------:|:--------:|:--------:|:--------:| :---------:|:--------:|:--------:| :--------:|:--------:| :--------:| :---------:|:--------:|\\n|Pre-processing | - | $O(n^3)$ | $O(n^3)$ | - | $O(n^3+e)$ | $O(n^3)$ | $O(n\\log n)$ | - | - | $O(n\\log n)$ | -|\\n| Training | $O(n^2+e)$ | $O(n^2+e)$ | $O(n+e)$ | $O(n+e)$ | $O(n)$| $O(n+e)$ | $O(n+e)$| $O(n+e)$| $O(n+e)$| $O(n^{\\frac{4}{3}}+e)$ |$O(n+e)$| \\n\\nThe table reveals that the computational complexity of the proposed DUALFormer is linearly proportional to the number of nodes and edges, demonstrating its efficiency.
Note that the complexity of DUALFormer aligns with those of the existing scalable graph transformers, such as NodeFormer, SGFormer, and Polynormer, while it does NOT require complicated preprocessing. This highlights the scalability of the proposed DUALFormer. Therefore, the conclusion of the high scalability of the proposed DUALFormer is not changed.\"}", "{\"comment\": \"Thanks for your response. I have carefully read the response and the revised manuscript. I think the authors have addressed my concerns. Hence, I raise my score to 6.\"}", "{\"comment\": \"Thanks for your insightful feedback; it has significantly enhanced the quality of our paper. We have carefully answered your concerns and made the necessary revisions to the manuscript. Please let us know if you have any further questions. We are more than willing to provide explanations or clarification to ensure a thorough understanding of our paper.\"}", "{\"summary\": \"This paper introduces DUALFormer, a graph transformer that tackles the challenges of the scalability and trade-off between local and global expressivity faced by current models. The motivation is to model the global dependencies among nodes by approximately characterizing the correlations between features. DUALFormer adopts a simple, intuitive design that includes local graph convolutional networks operating on the node dimension and a global self-attention mechanism operating on the feature dimension. The effectiveness and efficiency of the proposed DUALFormer are demonstrated in experimental evaluations across node classification and node property prediction tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1) The motivation for the dual design of local and global modules in this paper is clear and interesting.\\n2) The model DUALFormer is simple and efficient with a solid theoretical foundation. \\n3) The paper offers extensive experimental validation across various datasets. 
\\n4) The paper is well-organized and easy to read.\", \"weaknesses\": \"1) The paper has some minor errors that need fixing. For example, Table 2 misses the mean value for the GraphGPS model on the Citeseer dataset.\\n2) To enhance readability, Equation 13 should be split into two or three equations. \\n3) The model DUALFormer places the GNN layers, such as the SGC layers, after the attention layers. What is the rationale behind this design? Is it possible to reverse this order? \\n4) Figure 4 shows that the model utilizing APPNP outperforms the one using SGC in the Cora and Pubmed datasets. What accounts for this performance difference?\\n5) The effect of certain hyper-parameters, such as the parameter $\\\\alpha$ in Equation 13, on performance has yet to be unverified. \\n6) The paper does not mention any plans to open-source the code.\\n\\n* Update after carefully reviewing the authors' responses: The authors have provided detailed and thoughtful replies that effectively address most of my concerns. At this stage, I am pleased to increase my evaluation of the paper to '8: accept, good paper'.\", \"questions\": \"Update after carefully reviewing the authors' responses: no further concerns\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer CPJ4 (Part 1)\", \"comment\": \"> Q1. Suggested datasets include Roman-Empire, Question[1], Wiki, and ogbn-papers100M.\\n\\nR1. According to your suggestion, we have conducted model comparisons on the Roman-Empire, Question, and ogbn-papers100M datasets. For the Roman-Empire and Questions datasets, the data partitioning follows the scheme from [1], specifically, a 50/25/25 split for training, validation, and testing. For the ogbn-papers100M, the split ratio is public split [2], namely 78/8/14. The statistics of the dataset and the experimental results are shown in the following table. 
\\n\\n| | Roman-Empire | Question | ogbn-papers100M |\\n|:--------:|:--------:|:---------:|:--------:|\\n|#Nodes| 22,662 | 48,921 | 111,059,956 |\\n|#Edges| 32,927 | 153,540 | 1,615,685,872 |\\n|#Attributes| 300 | 301 | 128 |\\n|#Classes| 18 | 2 | 172 |\\n| NAGphormer | 74.45$_{\\u00b10.48}$ | 75.13$_{\\u00b10.70}$ | - |\\n| GOAT | 72.30$_{\\u00b10.48}$ | 75.95$_{\\u00b11.38}$ | - |\\n| SGFormer | 73.91$_{\\u00b10.79}$ | 77.06$_{\\u00b11.20}$ | 66.01$_{\\u00b10.37}$ |\\n| DUALFormer | 77.31$_{\\u00b10.17}$ | 78.62$_{\\u00b10.56}$ | 67.59$_{\\u00b10.28}$ |\\n\\nThe table reveals that, in comparison to the baselines, our proposed DUALFormer achieves consistent performance advantages on all three datasets, underscoring its superiority and scalability. \\n\\n[1] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2023\\n[2] Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS 2020\\n\\n---\\n>Q2. The statement, \\u201cMost GTs consistently show superior performance over GNNs across all datasets\\u201d (line 451), would be more convincing if compared with recent GNN baselines, such as ChebNetII[2] and OptBasis[3]. \\n\\nR2. Thank you for pointing out the imprecise description. The correct description would be: \\u201cMost GTs consistently show superior performance over **the backbone** GNNs, which are typically GCN and GAT, across all datasets.\\u201d Based on your advice, we further compare the proposed DUALFormer with these two recent GNN baselines, namely ChebNetII and OptBasis, on five datasets. As can be seen from the following table, the proposed DUALFormer consistently outperforms the baseline GNNs. This underscores the effectiveness of DUALFormer. 
\\n\\n| | Roman-Empire | Question | ogbn-papers100M | pokec | ogbn-arxiv |\\n|:--------:|:--------:|:---------:|:--------:|:--------:|:--------:|\\n| ChebNetII | 74.64$_{\\u00b10.39}$ | 74.41$_{\\u00b10.58}$ | 67.18$_{\\u00b10.32}$ | 82.33$_{\\u00b10.28}$ | 72.32$_{\\u00b10.23}$ |\\n| OptBasisGNN | 76.91$_{\\u00b10.37}$ | 73.82$_{\\u00b10.83}$ | 67.22$_{\\u00b10.15}$ | 82.83$_{\\u00b10.04}$ | 72.27$_{\\u00b10.15}$ |\\n| DUALFormer | 77.31$_{\\u00b10.17}$ | 78.62$_{\\u00b10.56}$ | 67.59$_{\\u00b10.28}$ | 82.97$_{\\u00b10.43}$ | 73.71$_{\\u00b10.22}$ |\\n\\n---\\n>Q3. There are a few typographical errors.\\n\\nR3. Thanks for your careful review. We will meticulously check the manuscript to ensure all errors are corrected.\"}", "{\"title\": \"Response to Reviewer CPJ4 (Part 2)\", \"comment\": \">Q4. The first question concerns the reasonableness of applying softmax to the global correlations between features. Could you clarify these differences and explain why it is reasonable to replace $\\\\phi(\\\\mathbf{Q})(\\\\phi(\\\\mathbf{K})^{\\\\top}\\\\mathbf{V})$ with $\\\\mathbf{Q}\\\\operatorname{softmax}(\\\\mathbf{K}^{\\\\top}\\\\mathbf{V})$?\\n\\nR4. The introduced softmax is just an implementation strategy, while the obvious equivalence $\\\\phi(\\\\mathbf{Q})(\\\\phi(\\\\mathbf{K})^{\\\\top}\\\\mathbf{V}) = (\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^{\\\\top})\\\\mathbf{V}$ is the key point we want to emphasize. This equivalence motivates the additional transformer on the feature dimension and the proposed DUALFormer. To demonstrate that the softmax can be safely omitted, we conduct an ablation study on its impact, with results shown in the following table. The results illustrate that we can employ $\\\\phi(\\\\mathbf{Q})(\\\\phi(\\\\mathbf{K})^{\\\\top}\\\\mathbf{V})$, which is equivalent to $(\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^{\\\\top})\\\\mathbf{V}$. We sincerely apologize for any confusion caused by the introduced softmax and will remove it in the final version. 
\\n\\n| | Cora | CiteSeer | PubMed | Computers | Photo | CS | Physics |\\n|:--------:|:--------:|:---------:|:--------:|:--------:|:---------:|:--------:|:--------:|\\n| without softmax | 85.69 | 74.55 | 83.62 | 93.29 | 96.91 | 95.61 | 97.30 |\\n| with softmax | 85.88 | 74.45 | 83.97 | 93.16 | 96.74 | 95.62 | 97.42 |\\n\\n---\\n> Q5. The interpretation of the proposed global attention in the special case of a one-dimensional feature. \\n\\nR5. For the case of a one-dimensional feature, **neither** previous approaches **nor** the proposed DUALFormer gathers information. $\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^T$ in previous methods reduces to a rank-1 matrix, whose rows only differ from each other by a factor, since $\\\\phi(\\\\mathbf{K})^T$ is a row vector and $\\\\phi(\\\\mathbf{Q})$ is a column vector. Thus, the aggregation patterns/coefficients for different nodes, represented by the rows of $\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^T$, only differ by this factor. As a result, $\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^T\\\\mathbf{V}$ degrades to the same aggregation pattern/coefficient for different nodes. Since the essence of aggregation lies in the different aggregation patterns/coefficients for different nodes, previous approaches lose this characteristic for a one-dimensional feature. Therefore, like the proposed DUALFormer, they also do **NOT** gather information in this special case. \\n\\n---\\n> Q6. The motivation for the study is not fully convincing.\\n\\nR6. We hope the above two responses clarify the rationality of our motivation. First, the key motivation is the obvious equivalence $\\\\phi(\\\\mathbf{Q})(\\\\phi(\\\\mathbf{K})^{\\\\top}\\\\mathbf{V}) = (\\\\phi(\\\\mathbf{Q})\\\\phi(\\\\mathbf{K})^{\\\\top})\\\\mathbf{V}$. Second, **neither** previous approaches **nor** the proposed DUALFormer gathers information for the case of a one-dimensional feature. Thanks for your special case. 
**It also demonstrates the importance of multiple features for the transformer. Thus, the proposed DUALFormer is further justified by exploring the correlation among multiple features with an additional transformer.**\"}", "{\"comment\": \"Thank you for your comprehensive response. Most of my concerns have been addressed, and I have accordingly increased my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I thank the authors for their rebuttal. I have slightly raised my scores due to the authors' sincerity. And I will further raise my score if the authors carefully revise their paper based on the above discussions.\"}", "{\"title\": \"I am inclined to increase my evaluation of the paper more favorably.\", \"comment\": \"Thank you for the detailed response, which effectively addresses most of my concerns. The clarifications provided on the model design and hyperparameter selection were particularly helpful and have improved my understanding of the work. As a result, I am willing to increase my score for the paper.\"}", "{\"comment\": \"I would like to thank the authors for their rebuttal. However, I am confused by the paper and the authors\\u2019 rebuttal, especially on weakness 3. According to the paper, the graph Transformer can learn the global representation and the GNN model behind it can learn some local representation. However, the output of the GT (global representation) is the input to the GNN. Why can the GNN still learn local representations given global inputs?\"}", "{\"title\": \"Response to Reviewer YCph\", \"comment\": \">Q1. Why can the GNN still learn local representations given global inputs?\\n \\nR1. We appreciate your feedback and understand your concerns about the local expressivity of the proposed model. We would like to offer a more nuanced explanation, focusing on two key aspects. 
\\n \\nFrom a macro perspective, the features obtained from the attention module can be seen as information-rich representations, which the GNN then refines through local message passing to capture the locality of graphs. This aligns with the optimization perspective of graph learning mentioned in the previous responses. \\n \\nFrom a micro perspective, these obtained features can be split into two representations: the self-representation and the globally aggregated representation. Accordingly, the GNN module in the proposed model serves as a shared component that updates these representations and eventually merges the updated features. To provide an intuitive understanding, we would like to present the following example. \\n \\nLet $\\mathbf{H}$ stand for the initial node representation. Note that the global attention module is designed to aggregate all related features based on the attention score matrix. It is evident that the diagonal elements of the attention matrix are non-zero, as each entity is inherently related to itself. \\n \\nThen, according to the diagonal and off-diagonal elements of the attention matrix, the aggregated representation can be decomposed into two parts: the self-representation $\\mathbf{H}\\mathbf{W}$ (with coefficients corresponding to the diagonal elements) and the globally aggregated representation $\\tilde{\\mathbf{H}}\\mathbf{W}$ (with coefficients corresponding to the off-diagonal elements). Omitting the parameter $\\mathbf{W}$, the updated representation can be formulated as $\\hat{\\mathbf{H}}=\\mathbf{H}+\\tilde{\\mathbf{H}}$.\\n \\nAs a result, for a linear GNN module, the output representations can be formulated as $GNN(\\hat{\\mathbf{H}})=GNN(\\mathbf{H})+GNN(\\tilde{\\mathbf{H}})$. 
Given that $GNN(\\\\mathbf{H})$ is an obvious local representation, it can be concluded that the final representation incorporates local information.\"}", "{\"comment\": \"I appreciate the authors for their detailed rebuttal. However, based on the current version, I do not believe the work is ready for publication. My concerns are as follows:\\n\\nFirst, as the authors claimed, the proposed method actually only capture the relations between each feature. As the results, it is efficient since the dimension of the feature vector is much smaller than the number of nodes. Maybe the results reported in this paper show that the feature-feature attention can lead to better model performance than the node-node attention.\\n\\nSecondly, the experimental results presented by the authors remain unclear and potentially misleading. For instance, in their response to Question 4, they mention a maximum GPU cost of 146 MB. I strongly recommend that the authors carefully review and validate their experimental setup and results to ensure they are accurate and reproducible.\\n\\nLastly, the investigation of the hyperparameter $\\\\alpha$ is insufficient. In the original version of the manuscript, Table 5 indicates that the optimal value of $\\\\alpha$ for the proteins dataset is zero. This finding should be critically examined and explained. Furthermore, the authors state that the search space for $\\\\alpha$ includes 0.1, 0.3, and 0.5, yet they report 0.2 as the optimal value for the arXiv dataset. These inconsistencies raise questions about the reliability and thoroughness of the experimental results.\\n\\nIn light of these issues, I believe the current version of this work requires substantial revisions before it can be considered for acceptance. 
The current version lacks the necessary rigor and clarity to support the claims made, and a more meticulous examination of the experimental designs and results is warranted.\"}", "{\"title\": \"Response to Reviewer GjDD\", \"comment\": \"We sincerely appreciate your professional and valuable feedback, which has significantly enhanced the quality of our paper. We would like to address each of your concerns individually in response.\\n\\n1. **The performance, total running time, and GPU usage comparison between DUALFormer and NAGphormer on large graphs**. The results can be found in Tab. 9 of the revised manuscript. The results highlight the scalability and effectiveness of DUALFormer while revealing the drawbacks introduced by the GNN module.\\n\\n2. **The analysis of the hyperparameter $\\\\alpha$**. The search range is presented in Section C.3. The sensitivity analysis of this hyperparameter is detailed in Section D.3. The findings indicate that DUALFormer exhibits stability to variations in $\\\\alpha$.\\n\\nWe hope these rebuttals have alleviated your concerns regarding 1) the key contributions of our proposed method, 2) the global expressivity of DUALFormer, 3) the efficiency and scalability of the model, and 4) the effectiveness of the feature-feature attention. We are grateful for your expertise and have benefited greatly from our interactions. If you have any more questions or need clarification, please let us know. We look forward to further discussions with you.\"}", "{\"title\": \"Response to Reviewer Q319\", \"comment\": \"> Q1. The paper has some minor errors that need fixing. For example, Table 2 misses the mean value for the GraphGPS model on the Citeseer dataset.\\n\\nR1. Thanks for your careful checking. We will thoroughly check the manuscript to correct any omissions. \\n\\n---\\n> Q2. To enhance readability, Equation 13 should be split into two or three equations. \\n\\nR2. 
Based on your suggestion, we will divide Equation 13 into three formulas by row. \\n\\n---\\n> Q3. The model DUALFormer places the GNN layers, such as the SGC layers, after the attention layers. What is the rationale behind this design? Is it possible to reverse this order? \\n\\nR3. We would like to explain this design choice as follows. \\n\\nThis choice is primarily motivated by the desire to decouple local and global modules, thereby minimizing their mutual interference. The self-attention module generally relies on input representations to calculate attention coefficients, whereas the GNN module, typically GCN or GAT, utilizes fixed propagation coefficients that are input-independent. Therefore, placing the GNN module after the self-attention module can mitigate their mutual interference and ensure that comprehensive information is retained. Thus, it seems that this order cannot be reversed. \\n\\n---\\n> Q4. Figure 4 shows that the model utilizing APPNP outperforms the one using SGC in the Cora and Pubmed datasets. What accounts for this performance difference?\\n\\nR4. This performance difference is primarily attributed to the difference in the localizing property of these two models. As can be seen in Figure 4, the original APPNP has a performance advantage over the original SGC on the Cora and PubMed datasets. This demonstrates the superiority of the former in terms of localizing property. By designing the global self-attention module in the pairwise dimension of the local GNN module, DUALFormer naturally obtains the global information with the guarantee that the two modules do not interfere with each other. Thus, the DUALFormer based on APPNP, with its superior localizing property, outperforms the DUALFormer based on SGC.\\n\\nThank you for the reminder. It underscores the compatibility of DUALFormer and suggests the potential for further enhancements by integrating it with more advanced GNNs.\\n\\n---\\n> Q5. 
The effect of certain hyper-parameters, such as the parameter $\\\\alpha$ in Equation 13, on performance has yet to be verified. \\n\\nR5. Thanks for your careful check. The impact of the hyper-parameter $\\\\alpha$ on model performance is shown below. \\n\\n| | Cora | CiteSeer | PubMed | Computers | Photo | CS | Physics |\\n|:--------:|:--------:|:---------:|:--------:|:--------:|:---------:|:--------:|:--------:|\\n| 0.1 | 85.88$_{\\u00b10.10}$ | 74.45$_{\\u00b10.39}$ | 83.97$_{\\u00b10.43}$ | 93.09$_{\\u00b10.14}$ | 96.74$_{\\u00b10.09}$ | 95.62$_{\\u00b10.05}$ | 97.37$_{\\u00b10.02}$ |\\n| 0.3 | 85.20$_{\\u00b10.12}$ | 73.69$_{\\u00b10.03}$ | 83.91$_{\\u00b10.07}$ | 93.14$_{\\u00b10.15}$ | 96.43$_{\\u00b10.07}$ | 95.38$_{\\u00b10.04}$ | 97.42$_{\\u00b10.03}$ |\\n| 0.5 | 85.35$_{\\u00b10.08}$ | 74.06$_{\\u00b10.06}$ | 83.89$_{\\u00b10.52}$ | 93.16$_{\\u00b10.17}$ | 96.39$_{\\u00b10.09}$ | 95.52$_{\\u00b10.05}$ | 97.39$_{\\u00b10.02}$ |\\n| Margin | 0.68 | 0.39 | 0.08 | 0.07 | 0.35 | 0.24 | 0.05 |\\n\\nFrom the table, it can be observed that DUALFormer is not sensitive to the parameter $\\\\alpha$. Specifically, within the parameter selection range, the variation of classification accuracy does not exceed $0.7\\\\%$.\\n\\n---\\n> Q6. The paper does not mention any plans to open-source the code.\\n\\nR6. We promise to open-source the code and provide a GitHub link once the paper is accepted.\"}", "{\"summary\": \"This paper introduces DUALFormer, a novel Graph Transformer model designed to address scalability challenges and improve local-global information fusion. The approach is both simple and theoretically grounded. Extensive experiments demonstrate DUALFormer\\u2019s effectiveness, scalability, and robustness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-motivated.\\n2. The proposed method is simple and effective.\\n3. The inclusion of theoretical analysis strengthens the work.\\n4. 
Extensive experiments show the effectiveness, scalability and robustness.\\n5. This paper is easy to follow.\", \"weaknesses\": \"1. The proposed method can be interpreted as \\\"attention on attributes\\\". I wonder how it is different from the standard self-attention. Especially, why can it perform better on node classification? And when is it expected to perform better, and when not?\\n2. Can you provide further analysis, such as case studies, to further explain the semantic meanings of the \\\"attention on attributes\\\"?\\n3. Can you provide further analysis and empirical studies to show that the GNNs after the graph Transformer can indeed learn the localities in graphs?\\n\\nI will raise my score if my concerns are properly addressed.\", \"questions\": \"N.A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GjDD\", \"comment\": \"> Q1. As the authors claimed, the proposed method actually only captures the relations between features. As a result, it is efficient since the dimension of the feature vector is much smaller than the number of nodes. Maybe the results reported in this paper show that the feature-feature attention can lead to better model performance than the node-node attention.\\n\\nR1. You are right. Firstly, the high efficiency stems from the fact that the dimension of the feature vectors is significantly smaller than the number of nodes. This has been justified through complexity analysis.\\nSecondly, feature-feature attention can lead to better model performance than node-node attention. This is because we ease the conflict between limited training data and modeling large-scale complicated relations between entries. On the one hand, node-node attention in the previous GT needs the model parameters to be accurately trained to model relations of $n^2$ pairs. 
However, the training data on graphs is often too limited to train them accurately. On the other hand, feature-feature attention in the proposed DUALFormer only requires modeling the relations of $f^2$ pairs, which is far fewer than $n^2$ pairs. Thus, the training requirement on the model parameters is NOT as high as in node-node attention, and training data of the same scale is sufficient. As a result, performance can be improved by easing the conflict between limited training data and modeling relations. \\nFinally, there are some key points we want to clarify. \\nFirstly, the proposed DUALFormer, as its name indicates, consists of two complementary components, i.e., a GNN block for local information and a feature-feature self-attention (SA) block for global information, instead of only capturing relations between features.\\nSecondly, the superiority of the proposed feature-feature attention is justified by both experiments and theoretical analysis. Theorem 1 demonstrates it can improve the discriminability, which ensures performance enhancement.\\n\\n---\\n> Q2. The experimental results presented by the authors remain unclear and potentially misleading. For instance, in their response to Question 4, they mention a maximum GPU cost of 146 MB. I strongly recommend that the authors carefully review and validate their experimental setup and results to ensure they are accurate and reproducible.\\n\\nR2. Thank you for your feedback. We acknowledge the error in our experimental setup and are correcting it to align with the protocol from \\\"NAGphormer.\\\" We are rerunning the experiments with the widely accepted 50%/25%/25% data split. We are committed to accuracy in our research, and as such, we will provide a detailed report of the updated results and experimental details upon completion. We believe that these revisions will not only address your concerns but also strengthen the integrity of our paper.\\n\\n---\\n> Q3. 
The investigation of the hyperparameter \\\\alpha is insufficient. In the original version of the manuscript, Table 5 indicates that the optimal value of \\\\alpha for the proteins dataset is zero. This finding should be critically examined and explained. The authors state that the search space for \\\\alpha includes 0.1, 0.3, and 0.5, yet they report 0.2 as the optimal value for the arXiv dataset.\\n\\nR3. Thanks for your meticulous review and for bringing the typographical error to our attention. Upon re-examining the section, we have identified the oversight and can confirm that the intended parameter value is 0.5 not the incorrectly stated 0.\\n\\nWe acknowledge the confusion that arose from our failure to specify the parameter range for the node property prediction task in our initial submission. We would like to clarify that the mentioned parameter range {0.1, 0.3, 0.5} is intended for the node classification task, as stated in Line 912: \\\"For the node classification task, ...\\\". We have identified the correct experimental parameter range for the node property prediction task as {0.1, 0.2, 0.3, 0.4, 0.5}. We understand the importance of this detail and its impact on the interpretation of our results. In the revised manuscript, we will include the results obtained within this range and provide a sensitivity analysis.\\n\\n---\\nWe are carefully incorporating the discussed points into the revised manuscript. This revised version, along with our rebuttal, will be submitted shortly. We sincerely appreciate your professional and valuable suggestions, as they have significantly contributed to enhancing the quality of our paper.\"}", "{\"title\": \"Response to Reviewer GjDD (Part 2)\", \"comment\": \">Q4. The authors should report the overall training cost of each method for efficiency study, especially on large-scale graphs. Maybe authors can refer to the settings in NAGphormer. 
For instance, can the proposed method achieve more efficient and more powerful performance than NAGphormer on Aminer, Reddit, and Amazon2M?\\n\\nR4. According to your advice, we have compared the training cost in terms of total running time (s) and GPU memory (MB) of the proposed DUALFormer and NAGphormer. The batch size is uniformly set to 2000. The total number of training epochs is set to 100. All shared configurations are kept the same to ensure fairness. The result is shown in the table below. \\n\\n| | AMiner-CS | Reddit | Amazon2M |\\n|:--------:|:--------:|:---------:|:--------:|\\n|#Nodes| 593,486 | 232,965 | 2,449,029 |\\n|#Edges| 6,217,004 | 11,606,919 | 61,859,140 |\\n|#Attributes| 100 | 602 | 100 |\\n|#Classes| 18 | 41 | 47 |\\n| | Accuracy(%) / Time(s) / Memory(MB) | Accuracy(%) / Time(s) / Memory(MB) | Accuracy(%) / Time(s) / Memory(MB) |\\n| NAGphormer | 56.21$_{\\u00b10.42}$ / 38.51 / 84 | 93.58$_{\\u00b10.05}$ / 30.88 / 140 | 83.97$_{\\u00b10.43}$ / 568.91 / 146 |\\n| DUALFormer | 58.56$_{\\u00b10.50}$ / 2.32 / 30 | 94.71$_{\\u00b10.07}$ / 6.82 / 64 | 84.80$_{\\u00b10.22}$ / 40.38 / 26 |\\n\\nFrom the table, two results can be observed: firstly, the proposed DUALFormer consistently outperforms NAGphormer across the three datasets, and secondly, the proposed DUALFormer has much shorter running times across the three datasets than the baseline NAGphormer. The advantage of DUALFormer primarily stems from its elimination of the need for preprocessing to acquire and store structural encodings, unlike NAGphormer, which requires such steps. This agrees with the conclusion of the complexity analysis.\\n\\n---\\n>Q5. As shown in Section 4.2, DUALFormer relies on the sampling strategy to perform on large-scale graphs, just like advanced linear graph Transformers. Hence, I think the GPU memory comparison is questionable since it is largely related to the batch size. Do authors set the same batch size for each method?\\n\\nR5. 
We understand your concern about fairness. The common hyper-parameters (including batch size) are the **same for each model**, as mentioned in Line 953. Specifically, all models are trained on ogbn-arxiv using full batch, whereas, for ogbn-products and pokec, the batch size is set to 10K. We will explicitly mention this in the captions of the corresponding tables and figures in the revised manuscript.\\n\\n---\\n>Q6. The analysis of the $\\\\alpha$ is missing. According to Table 5, the performance of DUALFormer could be sensitive to the value of $\\\\alpha$. So, the parameter analysis of $\\\\alpha$ should be added into the experiment section.\\n\\nR6. Thanks for your careful check. The impact of the hyper-parameter $\\\\alpha$ on model performance is shown below.\\n\\n| | Cora | CiteSeer | PubMed | Computers | Photo | CS | Physics |\\n|:--------:|:--------:|:---------:|:--------:|:--------:|:---------:|:--------:|:--------:|\\n| 0.1 | 85.88$_{\\u00b10.10}$ | 74.45$_{\\u00b10.39}$ | 83.97$_{\\u00b10.43}$ | 93.09$_{\\u00b10.14}$ | 96.74$_{\\u00b10.09}$ | 95.62$_{\\u00b10.05}$ | 97.37$_{\\u00b10.02}$ |\\n| 0.3 | 85.20$_{\\u00b10.12}$ | 73.69$_{\\u00b10.03}$ | 83.91$_{\\u00b10.07}$ | 93.14$_{\\u00b10.15}$ | 96.43$_{\\u00b10.07}$ | 95.38$_{\\u00b10.04}$ | 97.42$_{\\u00b10.03}$ |\\n| 0.5 | 85.35$_{\\u00b10.08}$ | 74.06$_{\\u00b10.06}$ | 83.89$_{\\u00b10.52}$ | 93.16$_{\\u00b10.17}$ | 96.39$_{\\u00b10.09}$ | 95.52$_{\\u00b10.05}$ | 97.39$_{\\u00b10.02}$ |\\n| Margin | 0.68 | 0.39 | 0.08 | 0.07 | 0.35 | 0.24 | 0.05 |\\n\\nFrom the table, it can be observed that DUALFormer is not sensitive to the parameter $\\\\alpha$. Specifically, within the parameter selection range, the variation of classification accuracy does not exceed $0.7\\\\%$.\"}", "{\"summary\": \"This paper develops a new architecture based on GNNs and modified Transformers. 
The authors conduct extensive experiments as well as theoretical analysis to show the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is easy to follow.\\n2. The authors provide the theoretical analysis.\\n3. The results on various datasets seem to be promising.\", \"weaknesses\": \"1. The comparison in the efficiency study seems to be unreasonable.\\n2. The key contributions of the proposed method are not clear.\\n3. The complexity analysis of the proposed method seems to be wrong.\", \"questions\": \"I have the following questions:\\n1. As the authors claim in Eq. 13, the proposed method only captures the feature-to-feature correlations. In my opinion, it is not the global information on the graph since it is unable to capture the relations between nodes. Why do authors claim the proposed method can capture the global information on the graph?\\n2. According to the paper, the efficiency is the most important contribution of the proposed method. I think the authors express this point in a wrong way. Firstly, the authors claim that the computational complexity of the proposed method is $O(n)$, which is obviously wrong. Based on Eq. 14, the calculation involves the adjacency matrix. Hence, the computational complexity of this part is $O(E)$, and it cannot be ignored since $|E|>|N|$ (even $|E|>>|N|$ on some graphs). Then, the authors only compare the time cost of each epoch to demonstrate the efficiency, which is not reasonable. I think the total training time cost is the most important metric to demonstrate the efficiency of a method. So, the authors should report the overall training cost of each method for efficiency study, especially on large-scale graphs. Maybe authors can refer to the settings in NAGphormer. For instance, can the proposed method achieve more efficient and more powerful performance than NAGphormer on Aminer, Reddit and Amazon2M?\\n3. 
As shown in Section 4.2, DUALFormer relies on the sampling strategy to perform on large-scale graphs, just like advanced linear graph Transformers. Hence, I think the GPU memory comparison is questionable since it is largely related to the batch size. Do authors set the same batch size for each method?\\n4. The analysis of the $\\\\alpha$ is missing. According to Table 5, the performance of DUALFormer could be sensitive to the value of $\\\\alpha$. So, the parameter analysis of $\\\\alpha$ should be added into the experiment section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer YCph\", \"comment\": \">Q1. The proposed method can be interpreted as \\\"attention on attributes\\\". I wonder how it is different from the standard self-attention. Especially, why can it perform better on node classification? And when is it expected to perform better, and when not?\\n\\nR1. Firstly, both the proposed self-attention on the attribute dimension and standard self-attention aim to capture global information, despite their different forms. The former approximately describes the global dependence between nodes, which is the main role of the latter, by characterizing the correlation between features. Secondly, the performance boost is not due to this alone but rather to the dual design of local and global modules. This prevents the trade-off between local and global information and enables comprehensive information modeling. Finally, the proposed method improves the performance of GNNs by capturing the relationships between features. It performs better when there is a strong correlation among features, and it may be less effective when such correlations are weak. \\n\\n---\\n>Q2. Can you provide further analysis, such as case studies, to further explain the semantic meanings of the \\\"attention on attributes\\\"? \\n\\nR2. 
The semantic meaning of the proposed attribute (feature) attention is that it focuses on the correlation among features, allowing the model to capture the information that is most discriminative for the task. We would like to provide the following case to illustrate this point. \\n\\nSuppose there are five nodes with four features, where three of these nodes (the first three, by index) belong to one class, and the other two belong to the other class. When the feature matrix exhibits low class-discriminability, the matrix can be exemplified by $\\mathbf{H}=$\\n\\n[[ $\\frac{1}{3}$, $\\frac{2}{3}$, 0, 0 ],\\n\\n[ $\\frac{2}{3}$, $\\frac{1}{3}$, 0, 0 ],\\n\\n[ 0, 1, 0, 0 ], \\n\\n[ 0, 0, 1, 0 ], \\n\\n[ 0, 0, 0, 1 ]]$_{5\\times 4},$ where the rows correspond to nodes and columns to features. \\n\\nAssume a clear feature correlation, e.g., the first two features signal the first class, while the last two features correspond to the second class. The attention score matrix can be expressed as $\\mathbf{S}=$\\n\\n[[ $\\frac{1}{2}$, $\\frac{1}{2}$, 0, 0 ],\\n\\n[ $\\frac{1}{2}$, $\\frac{1}{2}$, 0, 0 ],\\n\\n[ 0, 0, $\\frac{1}{2}$, $\\frac{1}{2}$ ], \\n\\n[ 0, 0, $\\frac{1}{2}$, $\\frac{1}{2}$ ]]$_{4 \\times 4}$. \\n\\nUsing feature attention, the updated features can be expressed as $\\hat{\\mathbf{H}}= \\mathbf{H}\\mathbf{S} =$ \\n\\n[[ $\\frac{1}{2}$, $\\frac{1}{2}$, 0, 0 ],\\n\\n[ $\\frac{1}{2}$, $\\frac{1}{2}$, 0, 0 ],\\n\\n[ $\\frac{1}{2}$, $\\frac{1}{2}$, 0, 0 ], \\n\\n[ 0, 0, $\\frac{1}{2}$, $\\frac{1}{2}$ ], \\n\\n[ 0, 0, $\\frac{1}{2}$, $\\frac{1}{2}$ ]]$_{5\\times 4}.$\\n\\nThe updated features exhibit more obvious class discriminability compared to the input features. \\n\\n---\\n>Q3. Can you provide further analysis and empirical studies to show that the GNNs after the graph Transformer can indeed learn the localities in graphs?\\n\\nR3. 
We would like to provide the following theoretical analysis to explain that the GNN module is able to learn the locality of the graph. \\n\\nFirstly, from the perspective of graph learning, many classical GNNs (e.g., GCN and SGC) can be induced by optimizing the objective function [1, 2], namely \\n\\n$tr(\\\\mathbf{H}^{\\\\top}\\\\tilde{\\\\mathbf{L}}\\\\mathbf{H})=\\\\frac{1}{2}\\\\sum_{v,u}\\\\tilde{a}_{v,u}\\\\Vert \\\\mathbf{h}_v-\\\\mathbf{h}_u\\\\Vert_2^2$, \\n\\nwhere $\\\\tilde{\\\\mathbf{L}}$ denotes the Laplacian matrix of the normalized adjacency matrix $\\\\tilde{\\\\mathbf{A}}$, and $\\\\mathbf{H}$ stands for the node features, such as $\\\\mathbf{H}=\\\\mathbf{X}\\\\mathbf{W}$ in GCN and $\\\\mathbf{H}= \\\\mathbf{X}$ in SGC. This indicates that GNNs essentially learn local information through feature updates that are constrained by the graph topology.\\n\\nFrom the above perspective, the GNN module in the proposed DUALFormer is equivalent to solving the above objective function with $\\\\mathbf{H}=\\\\mathbf{Z}$, that is, $tr(\\\\mathbf{Z}^{\\\\top}\\\\tilde{\\\\mathbf{L}}\\\\mathbf{Z})$, where $\\\\mathbf{Z}$ denotes the node features obtained from the self-attention on the dimension regarding features. Thus, even as a post-processing technique, the GNN module can ensure the localizing property by leveraging graph topology to constrain the feature updates. \\n\\n[1] Interpreting and Unifying Graph Neural Networks with An Optimization Framework. WWW 2021\\n\\n[2] Why Do Attributes Propagate in Graph Convolutional Neural Networks? AAAI 2021\"}", "{\"metareview\": \"In this submission, the authors proposed a new member of Graph-oriented Transformer models with advantages in performance and computational efficiency. In particular, the authors proposed a new architecture separating local and global self-attention modules, in which a linearized Transformer is applied to reduce the complexity of the global self-attention module. 
AC and reviewers agree that the study of Graph Transformers is an important topic for the community, and the design of the proposed model is reasonable to some extent, making the model applicable to large-scale graphs.\\n\\nIn the rebuttal phase, the authors provided detailed feedback, including more analytic experiments and explanations. At the same time, the paper was revised carefully. The reviewers' concerns, which are mainly about the rationality of the architecture and the soundness of the experiments, have been resolved successfully.\\n\\nIn summary, AC decided to accept this work.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers interacted with the authors. Most of the reviewers were satisfied with the authors' rebuttals and increased their scores. After reading the submission, the comments, and the rebuttals, AC has decided to accept this work.\"}" ] }
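As a numerical sanity check of the feature-attention example in the rebuttal above, the update $\hat{\mathbf{H}} = \mathbf{H}\mathbf{S}$ can be reproduced with exact fraction arithmetic. This is a minimal sketch using only the toy matrices from the rebuttal; it is not DUALFormer code.

```python
# Verify the toy feature-attention update H_hat = H @ S from the rebuttal,
# using exact fractions so the 1/2 entries come out exactly.
from fractions import Fraction as F

H = [[F(1, 3), F(2, 3), 0, 0],   # 5 nodes x 4 features (toy values)
     [F(2, 3), F(1, 3), 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

S = [[F(1, 2), F(1, 2), 0, 0],   # 4 x 4 feature-attention scores
     [F(1, 2), F(1, 2), 0, 0],
     [0, 0, F(1, 2), F(1, 2)],
     [0, 0, F(1, 2), F(1, 2)]]

def matmul(A, B):
    """Plain matrix product, kept dependency-free on purpose."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H_hat = matmul(H, S)
# Rows 1-3 (first class) become identical, as do rows 4-5 (second class),
# i.e. the updated features are more class-discriminative than the input.
```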
4ub9gpx9xw
Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations
[ "Katie Matton", "Robert Ness", "John Guttag", "Emre Kiciman" ]
Large language models (LLMs) are capable of generating *plausible* explanations of how they arrived at an answer to a question. However, these explanations can misrepresent the model's "reasoning" process, i.e., they can be *unfaithful*. This, in turn, can lead to over-trust and misuse. We introduce a new approach for measuring the faithfulness of LLM explanations. First, we provide a rigorous definition of faithfulness. Since LLM explanations mimic human explanations, they often reference high-level *concepts* in the input question that purportedly influenced the model. We define faithfulness in terms of the difference between the set of concepts that the LLM's *explanations imply* are influential and the set that *truly* are. Second, we present a novel method for estimating faithfulness that is based on: (1) using an auxiliary LLM to modify the values of concepts within model inputs to create realistic counterfactuals, and (2) using a hierarchical Bayesian model to quantify the causal effects of concepts at both the example- and dataset-level. Our experiments show that our method can be used to quantify and discover interpretable patterns of unfaithfulness. On a social bias task, we uncover cases where LLM explanations hide the influence of social bias. On a medical question answering task, we uncover cases where LLM explanations provide misleading claims about which pieces of evidence influenced the model's decisions.
[ "large language models", "faithful explanations", "explainability", "safety", "counterfactual reasoning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=4ub9gpx9xw
https://openreview.net/forum?id=4ub9gpx9xw
ICLR.cc/2025/Conference
2025
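The measurement pipeline described in the abstract above can be sketched end to end. This is a hedged toy illustration: the causal effect of a concept is taken as the KL divergence between answer distributions before and after a counterfactual edit, the explanation-implied effect as the fraction of explanations mentioning the concept, and faithfulness as their Pearson correlation. All numbers and concept names below are invented for illustration; the paper estimates these quantities with an auxiliary LLM and a hierarchical Bayesian model.

```python
import math

def kl(p, q):
    # KL divergence between two discrete answer distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs)
                  * sum((y - my) ** 2 for y in ys)) ** 0.5

# Hypothetical answer distributions P(Y|X) before vs. after intervening
# on each concept (concept names and numbers are made up).
p_orig = [0.7, 0.2, 0.1]
p_after = {"gender":   [0.2, 0.6, 0.2],    # large causal effect
           "behavior": [0.65, 0.25, 0.1]}  # small causal effect
causal_effect = {c: kl(p_orig, q) for c, q in p_after.items()}

# Hypothetical fraction of sampled explanations that mention each concept:
# the true driver ("gender") is almost never cited.
mention_rate = {"gender": 0.05, "behavior": 0.9}

faithfulness = pearson([causal_effect[c] for c in p_after],
                       [mention_rate[c] for c in p_after])
# With these toy numbers the ordering is exactly inverted, so the
# correlation is near -1: maximally unfaithful explanations.
```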
{ "note_id": [ "qKx8e4qcmT", "p8q8vtXIIh", "kml1BaAhY4", "kAiru3b0re", "jKNicDVCUv", "fG5EOalYup", "eBOsqxNBQQ", "Z1KbBxt4Qr", "YFc8poPyI5", "TtRPxHkgjR", "QV7UVOrJ2t", "Q9yExqwJcd", "LHvZXTrtXQ", "IMVu1xRHw0", "CETnAfxi56", "6Ken1uqUHu", "4rt97lJYHH", "3SgWhHWRJI", "3EpxsTDavD", "2PLtVCZao3" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730640054082, 1732298389830, 1732415614216, 1732298501928, 1734738390109, 1731427756749, 1732297821179, 1732747517474, 1732747752083, 1730661499216, 1732298205405, 1737524168061, 1732298341314, 1732298824139, 1732300428517, 1732298860715, 1732298756878, 1732298625483, 1732299091850, 1730320245426 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12125/Reviewer_uDtA" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Reviewer_7q6M" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Area_Chair_XDBm" ], [ "ICLR.cc/2025/Conference/Submission12125/Reviewer_L2o3" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Reviewer_7q6M" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Authors" ], [ "ICLR.cc/2025/Conference/Submission12125/Reviewer_5TDz" ] ], "structured_content_str": [ "{\"summary\": \"A formulation of faithfulness is presented called causal concept faithfulness. According to this formulation, when a model produces a natural language explanation of its behaviour that appeals to certain concepts (e.g. gender), altering the input to have a different concept value (e.g. changing the gender of the person mentioned in the text) should also alter the behaviour. Thus, under this formulation, a model is faithful if and only if it appeals to exactly those concepts that\\u2014if altered\\u2014would actually change its behaviour. To measure faithfulness, the correlation between two metrics is used: (1) the probability that a concept is mentioned in an explanation; and (2) the actual change after altering the inputs, measured as the KL divergence between the model's output distribution before and after alteration.\\nTo avoid having to measure these values on very large datasets, the authors propose to use a Bayesian hierarchical model, which 'partially pools' information across interventions for related concepts.\\nExperiments are performed on two tasks. The first is a task designed to elicit unfaithful explanations. Models perform poorly w.r.t. two out of three concepts. 
In the second task, models are shown to have limited faithfulness when explaining their answers for medical question answering.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Precise definition of what is meant by faithfulness.\", \"'causal concept faithfulness' as proposed will be useful for the explainability community.\", \"The paper is written well, while being information-dense.\"], \"weaknesses\": [\"The use of GPT-4o to generate counterfactual inputs is not evaluated independently.\"], \"questions\": [\"In Figure 1, it appears as though the correlation between EE and CE would be significantly lower if done independently for each concept, and then averaged. My question is: is calculating faithfulness on a per-concept basis possible with your method?\", \"And a related question: given that Pearson correlation only measures to what extent points lie on *a* line, and not on *the* line y=x, is it the most appropriate metric for your use case? Did you consider others?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their thorough review. We appreciate their insightful feedback and thoughtful comments and questions, which have helped us to strengthen the paper. We have now uploaded a new version with changes based on their feedback.\\n\\nWe have grouped the reviewer\\u2019s comments by theme and addressed each below.\"}", "{\"comment\": \"Thank you for all your effort and time in answering my review. 
The authors addressed all my comments in a satisfactory way, so I'll revise my score accordingly\"}", "{\"title\": \"Additional experiments to improve the validation of our method\", \"comment\": \"Thanks to the reviewer\\u2019s feedback, we have now added new experiments to improve our experimental validation: (1) we analyze our method in a faithful setting, (2) we examine the robustness of our method to dataset size, and (3) we have experiments in progress with open-source LLMs, and will provide an update on them before the end of the discussion period.\\n\\n**Insufficient validation of the proposed approach: the authors mention that their method can be used to quantify and discover interpretable patterns of unfaithfulness. However, there is no guarantee that the methodology detects truly unfaithful concepts. To further ensure the correctness of the approach, it would be nice to show linear agreement between CE and EE in a controlled setting where LLMs are known to only produce faithful explanations (e.g., unbiased setting).**\\n\\nIt sounds like the reviewer is concerned with the potential for our method to report that LLM explanations are unfaithful when they are actually faithful. We agree that this is an important concern. To address it, we have added an analysis focused on the setting in which we expect LLMs to produce faithful explanations, as you\\u2019ve suggested. In the updated version of the paper, we refer to this analysis in Section 4.1 in the main body (lines 415-417) and include the details in Appendix E.2. In this new experiment, we focused on the small subset of questions in the BBQ dataset that have objective answers (e.g., the top question in Table 28 says that one individual \\u201cstayed in the driver\\u2019s seat\\u201d and then asks \\u201cwho stayed in the car?\\u201d). 
When answering these questions, we expect LLMs to use the evidence (since it is conclusive) rather than relying on stereotypes, and hence, we expect LLM explanations to be faithful. Our results confirm this expected trend. On the objective BBQ questions, all models obtain high faithfulness scores of $\\\\mathcal{F}(\\\\mathbf{X}) \\\\geq 0.95$ (where $\\\\mathcal{F}(\\\\mathbf{X}) = 1$ is perfectly faithful). See Appendix E.2 for further details.\\n\\n**Small evaluation set and concerns about generalizability: for the two question-answering tasks, the authors estimate the causal effects (CE) and explanation-implied (EE) effects for 30 examples (using 50 generations for each dataset). However, it is unclear how robust these results are and whether these would generalize to larger models. Perhaps the authors could show how the faithfulness score varies as the number of datapoints increases, thereby providing an idea of the variability and stability of the scores.**\\n\\nWe thank the reviewer for pointing out this valid concern. As suggested, we have now added an analysis of the robustness of our method to the dataset size, which shows that the results are stable. Specifically, we repeated our analysis with the number of examples N = 5, 10, 15, 20, 25, 30. For each value of N, we obtain 1000 samples by bootstrapping. We plot N against the mean faithfulness score and include error bars for the standard deviation. For N >= 15, the mean faithfulness scores (i.e., Pearson correlation coefficients) are highly stable: they are all within $0.03$ of each other. We refer to this analysis in Section 6 in the main body of the paper and provide details in Appendix E.3. In addition, we now mention the small evaluation set size as a limitation in the paper (Section 6, lines 520-524).\\n\\n**Generalizability to open-source models: the paper carries analyses on two OpenAI models (GPT-3.5, GPT-4o) and one Anthropic model (Claude-3.5-sonnet). 
Could be interesting to contextualize the faithfulness of existing open source models (e.g., Llama 3.1) and draw comparisons with closed-source ones.**\\n\\nWe agree with this point. We are working on adding experiments in which we assess the faithfulness of Llama 3.1 models. We will provide an update on that when we finish these new experiments. We expect this to be before the end of the review period.\"}", "{\"metareview\": \"This paper studies the question of whether explanations provided by LLMs for their behaviors are in fact faithful to the actual behaviors. This requires formalizing faithfulness, developing a methdology to measure it, and then validating the methodology. All reviewers appreciated the problem, found the paper clear, and the contribution significant. The main concern here was whether the method was sufficiently validated, but overall this seems to be a minor consideration.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers were positive at all stages\"}", "{\"summary\": \"This paper aims to measure the faithfulness of LLM generated natural language explanations in terms of specific concepts mentioned in it. Specifically, faithfulness is measured by the correlation between the causal effect of a concept (measured by counterfactual predictions) and the likelihood of the concept being mentioned in the explanation. 
This analysis produces several interesting results, e.g., the model doesn\\u2019t mention gender in its explanation despite gender having a large causal effect on its prediction; safety alignment can affect model explanation.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Inspecting more fine-grained faithfulness is a novel contribution and it allows us to gain a better understanding of specific biases in model explanations.\", \"The paper proposes a principled method to quantify faithfulness based on counterfactual examples.\", \"The finding that safety alignment can make the model hide true reasons (e.g., gender bias) in its explanation (thus is only a form of shallow alignment) is very interesting.\"], \"weaknesses\": [\"It is unclear how the explanations are generated, e.g., are these CoT from zero-shot or few-shot prompting? Is the explanation generated before or after the model prediction? It would be interesting to analyze how different prompting methods change the result or improve faithfulness.\"], \"minor\": [\"152: distinct -> disentangled might be a more precise word\", \"194: typo: x in the equation should be in vector form\"], \"questions\": [\"Here the causal concept effect is considered the ground truth in some sense. Then would it make sense to directly explain the model prediction using the causal concept effect?\", \"For the dataset level faithfulness, instead of averaging question level faithfulness, why not directly measure PCC of all examples in the dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Their feedback has helped us to strengthen the paper.\\n\\nWe have now uploaded a new version in which we have made changes based on reviewer feedback. In the main body of the paper, we have indicated the parts we have added in red. We have also added multiple new appendices.\\n\\nWe respond to the comments of each individual reviewer by replying to them directly below.\"}", "{\"title\": \"update on analysis of impact of prompting strategy on LLM faithfulness\", \"comment\": \"As an update, we have now completed an analysis of the choice of prompting strategy on faithfulness. We have posted an updated version of the paper that includes this new experiment. We refer to it in the main body (see lines 414-417) and provide details in Appendix E.4.\\n\\nWe repeated our experiments on the social bias task using a prompt that explicitly encourages the model to avoid stereotypes. Specifically, we use the same few-shot chain-of-thought prompt used in our other experiments, with one additional statement: \\u201cPlease ensure that your answer is unbiased and does not rely on stereotypes.\\u201d For GPT-4o and Claude-3.5-Sonnet, we found that this choice of prompt had little effect on the results. For these two LLMs, the faithfulness scores changed by less than $0.04$ PCC, and the category-specific trends are mainly unchanged. For GPT-3.5, we found that using this \\u201canti-bias\\u201d prompt *decreases* faithfulness. While this might appear somewhat surprising, our method surfaces concept-level unfaithfulness patterns that help to explain this. When using the anti-bias prompt, interventions on behavior-related concepts have less of a causal effect on GPT-3.5\\u2019s answers compared to using the standard prompt. Despite this, GPT-3.5\\u2019s explanations still reference behavior concepts at high rates. As a result, the prompt leads to a reduced faithfulness score. 
To further explain this, we observe that the reduced effects of the behavior concepts seem to stem from the fact that GPT-3.5 tends to more frequently select \\u201cundetermined\\u201d as its answer when using the anti-bias prompt, regardless of the intervention. Further details of this experiment are in Appendix E.4.\\n\\nWe think this experiment nicely highlights the utility of our method. Our approach enables us to not only quantitatively assess the impact of the choice of prompting strategy on faithfulness but also to understand *why* using a certain prompt results in more or less faithful explanations. We thank the reviewer for suggesting this experiment, as we think including it substantially improved the paper.\"}
When we examined the concept category results, we found that Llama-3.1-8B exhibits the same pattern of unfaithfulness as the GPT models (although to a lesser degree): its explanations cite behavior-related concepts regardless of their causal effects and omit identity-related concepts regardless of their causal effects.\\n\\nWe also applied Llama-3.1-8B-Instruct to the medical question answering task, using the same few-shot prompt we used for the other LLMs. However, we found that rather than answering the questions, Llama-3.1-8B-Instruct generates new questions. This makes our faithfulness analysis inapplicable (since there are no answers/explanations to analyze). So far, it appears that Llama-3.1-70B-Instruct does not have this same issue -- it answers the questions as expected. We plan to include results for this larger model on medical question answering.\"}", "{\"summary\": [\"**Summary**:\", \"This paper adopts a causal inference approach to define and evaluate the faithfulness of LLM-generated explanation in the context of two question answering tasks. The obtained results X.\", \"**Main contributions**: The main contributions are the definition and methodology proposed to assess the faithfulness of explanations.\", \"**Methodology**:\", \"Key to the methodology is the assumption that a model is faithful if its explanations consist of only concepts that are impactful for the decision (i.e., have large causal effects) (lines 183-185).\", \"The authors first compute the causal effect associated with each concept in the explainability (CE). An auxiliary LLM is used to determine the _explainable_ concepts and produce counterfactual perturbations for each input x. 
CE is then estimated by contrasting the distributional differences between LLM responses when given the modified inputs vs the original inputs.\", \"Then the authors determine the prevalence of each concept appearing in the explanation (EE).\", \"Finally, the authors determine the linear alignment between CE and EE (dubbed causal concept faithfulness) for each example using the Pearson correlation coefficient. The dataset-level faithfulness score is the average over the examples.\", \"**Writing**: Overall the writing is clear and easy to follow! The authors did a good job in exposing the ideas. Consider the following comments to further increase clarity:\", \"Add information about when the experiments were run with each model.\", \"lines 321-323: you describe the colors for each of the concept categories. However, there seems to be a mismatch between the category color in the image and the color described in text.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Causally inspired definition and methodology to assess the faithfulness of LLM-generated explanations.\", \"Empirical validation of the proposed methodology in two different question answering datasets.\", \"The finding that GPT-3.5 produces more faithful explanations (at a dataset level) than the more recent and advanced models (GPT-4o and Claude 3.5 sonnet) is interesting. They also show that unfaithful explanations by GPT-3.5 are more harmful than those by GPT-4o\", \"The analysis concerning the impact of different types of interventions (i.e., remove concept vs swap it with a different value) is interesting, revealing the brittleness of safety guards employed in GPT-3.5 and GPT-4o.\"], \"weaknesses\": \"1. **Insufficient validation of the proposed approach**: the authors mention that their method can be used to quantify and discover interpretable patterns of unfaithfulness. However, there is no guarantee that the methodology detects truly unfaithful concepts. 
To further ensure the correctness of the approach, it would be nice to show linear agreement between CE and EE in a controlled setting where LLMs are known to only produce faithful explanations (e.g., unbiased setting).\\n2. **Important parameters of the experiments are not clear in the paper, which may affect reproducibility of the experiments**. The authors could consider providing additional details about the exact number of perturbations generated for each example, the number of generations used to estimate P(Y|X) (during the definition of CE), the decoding hyperparameters (including the temperature and max number of tokens). Additional details should also be provided about how each response y is parsed \\u2013 this is particularly relevant given that the evaluated models are known to produce nuanced and semantically equivalent texts.\\n3. **Small evaluation set and concerns about generalizability**: for the two question-answering tasks, the authors estimate the causal effects (CE) and explanation-implied (EE) effects for 30 examples (using 50 generations for each dataset). However, it is unclear how robust these results are and whether these would generalize to larger models. Perhaps the authors could show how the faithfulness score varies as the number of datapoints increases, thereby providing an idea of the variability and stability of the scores.\\n4. **Univariate counterfactuals**: if I understood correctly, the proposed framework focuses on perturbing the sentences one concept at a time, irrespective of the correlations between features. However, this fails to account for the correlations between different features (e.g., names of schools or organizations are related to socio-demographic features). \\n5. **Generalizability to open-source models**: the paper carries analyses on two OpenAI models (GPT-3.5, GPT-4o) and one Anthropic model (Claude-3.5-sonnet). 
It could be interesting to contextualize the faithfulness of existing open-source models (e.g., Llama 3.1) and draw comparisons with closed-source ones.\", \"questions\": \"1. One of the decisions in the paper is to use an auxiliary model to extract concepts and propose a list of alternative values for each concept. Why is this necessary, and is there any assumption or requirement that helped settle on GPT-4o as the auxiliary LLM? The authors could better motivate the selection of GPT-4o in their paper, perhaps by including a small human study comparing the effectiveness of different models in extracting concepts and creating the list of alternate values. The authors should also consider including the prompt used to extract concepts and concept values in the Appendix.\\n2. In line 218, the authors mention the use of the auxiliary LLM to \\u201clist distinct concepts in the context of x\\u201d. What kind of verifications were performed to ensure that the extracted concepts and their lists of values were meaningful? The authors should consider adding a list of extracted concepts and their list of values to the Appendix. They should also consider adding more details about the validation (e.g., manual validation or llm-as-a-judge approach).\\n3. Similarly to the two questions above, in lines 224-225, the authors mention \\u201cto generate each counterfactual, we instruct [auxiliary LLM] to edit the question x by changing the value of [concept] ..., while keeping everything else the same\\u201d. However, there seems to be no validation of this claim. Did the authors validate that the perturbed input x was indeed minimally distant from x? If not, the authors should consider including such an analysis, perhaps by showing the minimum edit distance or n-gram overlap between the modified and original inputs.\\n4. Why did the authors select a linear correlation coefficient as opposed to a non-linear coefficient?\\n5. 
In Figure 1 (right), we observe that different behavioral concepts end in different regions of the scatter plot (there are orange points at the top and orange points around EE in [-0.5, -1.5]). Is there any insight or pattern that justifies why there are different clusters? Could it be that the model is less prone to use some concepts for specific demographics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their careful reading and valuable feedback. We appreciate that they recognize the novelty of our contribution, and in particular, our introduction of a principled method for assessing faithfulness in a fine-grained manner. We are grateful that they found the new insights about LLM faithfulness and safety alignment produced by our method interesting.\\n\\nWe also thank the reviewer for helping to identify places where the clarity of our paper could be improved. We have revised the paper based on this feedback. We address individual points below.\\n\\n**It is unclear how the explanations are generated, e.g., are these CoT from zero-shot or few-shot prompting? Is the explanation generated before or after the model prediction? It would be interesting to analyze how different prompting methods change the result or improve faithfulness.**\\n\\nWe agree that this was not clear in the original version of the paper. We have now clarified this in the main text (see Section 4.1 line 286 and Section 4.2 line 431) and provided the exact prompts used in Appendix D.3 (see Table 24 for the BBQ prompt and Table 25 for the MedQA prompt). In both the BBQ and MedQA experiments, we use a prompt that includes three few-shot examples and a chain-of-thought trigger (i.e., \\u201clet\\u2019s think step-by-step\\u201d). The prompt directs the model to produce the explanation before the prediction. 
\\n\\nWe agree that it would be interesting to analyze the impact of the choice of prompting strategy on faithfulness. We are now working on an analysis in which we repeat our experiments on the social bias task using a different prompt. More specifically, we seek to understand if using a prompt that specifically directs the LLM to avoid social bias will lead to improved faithfulness. In our initial results, it appears that the difference is minimal, but we will provide a more complete report once this analysis is finished. We expect this to be done before the end of the review period.\\n\\n**Minor: 152: distinct -> disentangled might be a more precise word**\\n\\nWe have updated the paper accordingly.\\n\\n**194: typo: x in the equation should be in vector form**\\n\\nThank you for catching this. We have now updated it in the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"We thank the reviewer for their thoughtful questions. We respond to each below.\\n\\n**Q: Here the causal concept effect is considered the ground truth in some sense. Then would it make sense to directly explain the model prediction using the causal concept effect?**\\n\\nYes, we think that our method for assessing the causal effects of concepts could itself be used as an explainability method. As we saw in our experiments in this paper, examining the magnitude of the causal effect of each concept provides an understanding of which factors influence LLM decisions and which do not. Existing work has explored the idea of using causal concept effects to explain model decisions in settings that are different from ours [1][2]. In particular, prior work assumes that all questions contain the same set of concepts and that concepts/concept values are manually annotated. In contrast, we allow questions to have different concepts and automatically extract them with an LLM. 
Moreover, we are the first work to present a Bayesian hierarchical modeling approach to estimating concept effects. This approach has the advantage of leveraging shared information across questions, while still allowing for questions to have different concepts and still capturing question-specific variability.\\n\\nWhile we think our method has promise as an explainability approach, additional work is needed to validate it for this purpose (e.g., experimental comparisons to [1][2]). That is out of scope for this paper, which is focused on examining LLM faithfulness. However, we plan to explore it in future work.\\n\\n[1] Gat, Yair, et al. \\\"Faithful Explanations of Black-Box NLP Models Using LLM-Generated Counterfactuals.\\\" arXiv preprint arXiv:2310.00603 (2023).\\n[2] Abraham, Eldar D., et al. \\\"CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior.\\\" Advances in Neural Information Processing Systems 35 (2022): 17582-17596.\\n\\n**Q: For the dataset level faithfulness, instead of averaging question level faithfulness, why not directly measure PCC of all examples in the dataset?**\\n\\nThis is a good question. We thought about this carefully when developing our method. The primary reason for averaging question-level faithfulness is that we think that measuring the dataset-level PCC can be misleading in some cases. In particular, it is possible to have a case in which an LLM\\u2019s explanations incorrectly order concepts by their causal effects *within* each question, but when looking *across* questions, the PCC is high (as in Simpson\\u2019s Paradox). This can happen if on certain questions the causal effects and explanation-implied effects of concepts are systematically higher than on other questions. In this case, the low within-question PCC implies that the explanations provided for each individual question do not correctly refer to the most influential concepts for that question, which makes them unfaithful/misleading. 
But the high dataset-level PCC fails to capture this. We have now added a discussion of this point to Appendix G, which we refer to in the main body of the paper (line 201).\"}", "{\"title\": \"Other Comments\", \"comment\": \"**Univariate counterfactuals: if I understood correctly, the proposed framework focuses on perturbing the sentences one concept at a time, irrespective of the correlations between features. However, this fails to account for the correlations between different features (e.g., name of schools or organizations is related to socio-demographic features).**\\n\\nThe reviewer is correct in their understanding that we focus on single concept interventions, and that this can result in issues in the case of correlated concepts. We now mention this in the limitations section (Section 6 lines 528-530) and provide a more detailed discussion in Appendix G. In future work, we plan to examine multi-concept interventions as a way of addressing this.\\n\\nWe would like to make a clarifying comment about the challenge of correlated concepts. As explained in Appendix G, correlated concepts are primarily an issue when generating counterfactuals that involve *removing* a concept. If multiple concepts are correlated in the data used to train an LLM (e.g., an individual's race and an individual's name), then even when a single concept (e.g., race) is removed from the input question, an LLM may still infer it using the information provided by the other concepts (e.g., name). However, generating counterfactuals that involve *replacing* the value of a concept (e.g., changing an individual's race from Black to White) can help to resolve this issue. 
This is because in this case, the LLM can use the provided value (e.g., White) of the concept intervened on rather than inferring it based on the other concepts.\\n\\n**Q: Why did the authors select a linear correlation coefficient as opposed to a non-linear coefficient?**\\n\\nOur goal is to assess the alignment between the causal effects of concepts and the rate at which they are mentioned in the LLM\\u2019s explanations. Intuitively, if an LLM\\u2019s explanations are faithful, then the \\u201calignment\\u201d should be high. There are multiple metrics we could use to measure \\u201calignment,\\u201d and it is not clear which is best. Since we are the first work to perform a faithfulness analysis of this kind, we opt to use a linear coefficient for simplicity. However, we think it would be interesting to examine multiple alignment metrics in future work.\\n\\n**Q: In Figure 1 (right), we observe that different behavioral concepts end in different regions of the scatter plot (there are orange points in the top and orange points around EE in [-0.5, -1.5]. Is there any insight or pattern that justify why there are different clusters? Could it be that the model is less prone to use some concepts for specific demographics?**\\n\\nAs you observed, whereas the explanations given by GPT models consistently mention the behavior-related concepts (shown in orange), it appears that Claude sometimes mentions them (i.e., the high EE value cluster) and sometimes doesn\\u2019t (i.e., the low EE value cluster). When examining a subset of the Claude explanations manually, we found that they appear to follow one of two patterns: (1) they mention the behavior-related concepts as the reason for the decision or (2) they choose the \\u201cundetermined\\u201d answer choice and say that this is because it is not safe/ethical to answer the question (note: in this case, behavior concepts are not mentioned). 
It seems that pattern (1) corresponds to the first cluster you mentioned and pattern (2) corresponds to the second. We plan to add a discussion of this in the paper (after finding places to cut to make space for this).\"}", "{\"comment\": \"We thank the reviewer for their careful review of our work and their thoughtful feedback. We appreciate that they recognize that our work tackles a critical issue and that it provides a concrete solution to a difficult problem. We are grateful that the reviewer found our paper to be engaging and enjoyable to read.\", \"we_address_each_comment_below\": \"**It remains uncertain whether biases impact all types of reasoning tasks uniformly or if certain domains are more affected than others.**\\n\\nAlthough we did not investigate all types of reasoning tasks, we did choose to analyze two tasks that are substantially different from each other. The first, a social bias task, contains subjective questions that are designed to elicit stereotype-based reasoning. The second, a medical question answering task, contains objective questions that are intended to be answered with logical/fact-based reasoning. Whereas one might expect LLMs to produce unfaithful explanations on the first task, it is less clear what to expect on the second task. We think a notable contribution of our work is demonstrating that LLMs can produce unfaithful explanations on tasks with such stark differences. We see this as early evidence that unfaithfulness might be common across many different tasks, as opposed to just those that are clearly prone to bias. We think the recognition of this risk is important knowledge for the AI community \\u2013 and it is thanks to our method that we are aware of it. 
In future work, we would like to investigate the prevalence of faithfulness across tasks more extensively by applying our method to additional domains.\\n\\n**Minor comment Line 321 typo -> should be orange for behavior, red for identity**\\n\\nThank you for catching this. We have fixed it in the paper.\\n\\n**Q: Can a faithful explanation be biased? For example, suppose if both CE and EE are high for the example in Table 1, and if the model would have answered Male: 26% Female: 74%, with Explanation References: Traits/Skills: 15% Age: 0% Gender: 85%. This is a clear case of gender bias, but can we say the explanation is faithful here referring to Definition 2.3?**\\n\\nGood question. Yes, LLMs can be influenced by bias yet still produce faithful explanations. Your example is a nice illustration of that point \\u2013 if an LLM\\u2019s decisions are influenced by gender bias and its explanations admit this bias, then we consider the explanations to be faithful. This is captured by Definition 2.3: if both the CE and EE are high for gender compared to the other concepts, then this contributes to a high faithfulness score.\\n\\n**How can we understand the observations if the models memorized these benchmark datasets during their training stage? Does this imply that the model\\u2019s reasoning process is so vulnerable to bias that it can even disregard a previously memorized correct answer?**\\n\\nWe are not sure if the LLMs we study have seen the datasets we use in our experiments during training, but it is a possibility. As you allude to in your comment, in our experiments on the social bias task, we find that there are cases in which the LLM doesn\\u2019t provide the correct answer and instead selects the bias-aligned answer. 
If an LLM had seen these questions during training, then it seems like there are at least two reasons why the model might not consistently select the correct answer: (1) despite seeing these questions, it didn\\u2019t memorize them or (2) it has memorized them, but because of differences in the prompt we use vs the prompt used during training, our prompt doesn\\u2019t trigger the memorized response.\"}", "{\"title\": \"Writing clarify suggestions\", \"comment\": \"**Add information about when the experiments were run with each model.**\\n\\nCan you please clarify what you mean by this? Are you asking us to specify the dates on which we conducted our experiments? In the paper, we\\u2019ve included the specific releases of the LLM APIs that we use (e.g., gpt-4o-2024-05-13). Does this address your concern?\\n\\n**lines 321-323: you describe the colors for each of the concept categories. However there seems to be a mismatch between the category color in the image and the color described in text.**\\n\\nThank you for catching this. We have now fixed it in the paper.\"}", "{\"title\": \"Auxiliary LLM: Model Choice and Outputs\", \"comment\": \"We agree with the reviewer that several aspects regarding our use and validation of the auxiliary LLM were not clear in the original paper. Based on their feedback, we have (1) clarified our motivation for using GPT-4o as the LLM, (2) provided a random sample of example outputs (concepts, concept values, and counterfactuals), and (3) clarified our process for validating the quality of LLM outputs.\\n\\n**Q: One of the decisions in the paper is to use an auxiliary model to extract concepts, propose a list of alternative values for each concept. Why is this necessary and is there any assumption or requirement that helped settling for a GPT-4o as the auxiliary LLM? 
The authors could better motivate the selection of GPT-4o in their paper, perhaps by including a small human study comparing the effectiveness of different models in extracting concepts and creating the list of alternate values.**\\n\\nWe use an auxiliary LLM for these steps because they would be time-consuming to perform manually. Hence, automating this step is important for the utility and scalability of our method.\\n\\nWe agree that our motivation for choosing GPT-4o was not clear in the original version of the paper. We\\u2019ve now added a sentence motivating this choice (see Section 4.1 lines 280-282). We chose to use a GPT-based model as the auxiliary LLM because they have been used for this purpose in prior work. [1][2] use GPT-based models for counterfactual generation and find that they produce high-quality counterfactuals. We chose GPT-4o specifically because it is a state-of-the-art model.\\n\\nWe like the idea of including a small human study. However, given the time, IRB approval, and financial costs entailed, we do not anticipate being able to execute this in time for the rebuttal. We would like to pursue it in future work.\\n\\n[1] Gat, Yair, et al. \\\"Faithful explanations of black-box nlp models using llm-generated counterfactuals.\\\" arXiv preprint arXiv:2310.00603 (2023).\\n[2] Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models (Wu et al., ACL-IJCNLP 2021)\\n\\n**Q: In line 218, the authors mention the use of auxiliary LLM to \\u201clist distinct concepts in the context of x\\u201d. What kind of verifications were performed to ensure that the extracted concepts and their list of concepts were meaningful? The authors should consider adding a list of extracted concepts and their list of values to the Appendix. 
They should also consider adding more details about the validation (e.g., manual validation or llm-as-a-judge approach).**\\n\\nWe manually validated the extracted concepts and concept values to confirm that they are plausible, meaningful, and distinct. For BBQ, we reviewed all concepts and concept values (of which there are 134 total). For MedQA, we reviewed concepts and concept values for a random sample of 15 out of the 30 questions (161 concept/concept values total). We have now included a random sample of the concepts and concept values in Appendix D.2. The concepts/values for BBQ are in Table 18 and the concepts/values for MedQA are in Table 21.\\n\\n**Q: Similarly, to the two questions above, in line 224-225, the authors mention \\u201cto generate each counterfactual, we instruct [auxiliary LLM] to edit the question x by changing the value of [concept] ..., while keeping everything else the same\\u201d. However there seems to be no validation of this claim. Did the authors validate that the perturbed input x was indeed minimally distant from x? If not, the authors should consider including such analysis, perhaps by showing the minimum edit distance or n gram overlap between the modified and original inputs.**\\n\\nWe manually reviewed the counterfactuals to check that (1) the requested edit was made and (2) no other information in the question was altered. For BBQ, we reviewed all counterfactuals (268 total). For MedQA, we reviewed the counterfactuals for a random sample of 15 out of the 30 questions (161 counterfactuals total). We have now included a random sample of counterfactuals in Appendix D.2. For BBQ, the removal-based counterfactuals are in Table 19 and the replacement-based counterfactuals are in Table 20. For MedQA, all counterfactuals are removal-based and they are in Tables 22 and 23.\"}", "{\"title\": \"Clarification of experiment details\", \"comment\": \"We thank the reviewer for pointing out important missing details. 
We\\u2019ve now added them to the paper in Appendix D, which we refer to in the main body of the paper (see line 205). We state where each is below.\\n\\nTo ensure full reproducibility, we will release our code with the final version of our paper. We plan to spend time in the near future documenting it carefully. However, if reviewers are interested in seeing it now, we can share it in a ZIP file.\\n\\n**Important parameters of the experiments are not clear in the paper, which may affect reproducibility of the experiments. The authors could consider providing additional details about the exact number of perturbations generated for each example, the number of generations used to estimate P(Y|X) (during the definition of CE), the decoding hyperparameters (including the temperature and max number of tokens). Additional details should also be provided about how each response y is parsed \\u2013 this is particularly relevant given that the evaluated models are known to produce nuanced and semantically equivalent texts.**\", \"the_details_of_our_experimental_settings_are_now_included_in_the_following_places_in_the_paper\": [\"The exact number of perturbations generated for each example is in Table 17 (referred to in Appendix D.2).\", \"The number of generations used to estimate P(Y|X) is in Appendix D.3.\", \"The decoding parameters used for all auxiliary LLM steps are specified in Appendix D.1.\", \"The decoding parameters used for the primary LLMs (i.e., those we measure the faithfulness of) are in Appendix D.3.\", \"Details on the LLM response parsing for the auxiliary LLM steps are in Appendix D.1.\", \"Details on response parsing for the primary LLMs (i.e., those we measure the faithfulness of) are in Appendix D.3.\", \"**The authors should also consider including the prompt used to extract concepts and concept values in the Appendix.**\", \"We now include the prompts used for the auxiliary LLM steps in Appendix D.1.\", \"For concept extraction: we include 
the basic prompt template in Table 6, the prompt details for the social bias task in Table 7, and the prompt details for medical question answering in Table 8.\", \"For concept values extraction: we include the basic prompt template in Table 9, the prompt details for the social bias task in Table 10, and the details for medical question answering in Table 11.\"]}", "{\"comment\": \"We thank the reviewer for their careful review and for providing useful feedback and questions. We appreciate their positive comments regarding our precise definition of causal concept faithfulness and its utility for the explainability community.\", \"we_address_each_comment_below\": \"**The use of GPT4o to generate counterfactual inputs is not evaluated independently.**\\n\\nWe conducted a careful manual review of a large random sample of the counterfactual questions to validate their correctness. For each question, we checked that: (1) the requested edit was made and (2) no other information in the question was altered. For BBQ, we reviewed all counterfactuals (268 total). For MedQA, we reviewed the counterfactuals for a random sample of 15 out of the 30 questions (161 counterfactuals total). We have now included a random sample of counterfactuals in Appendix D.2. For BBQ, the removal-based counterfactuals are in Table 19 and the replacement-based counterfactuals are in Table 20. For MedQA, all counterfactuals are removal-based and they are in Tables 22 and 23.\\n\\nIn addition, we would like to point out that GPT-based models have been used in prior work for counterfactual generation and have been found to produce high-quality counterfactuals [1][2].\\n\\n[1] Gat, Yair, et al. 
\\\"Faithful explanations of black-box nlp models using llm-generated counterfactuals.\\\" arXiv preprint arXiv:2310.00603 (2023).\\n[2] Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models (Wu et al., ACL-IJCNLP 2021)\\n\\n**Q: In Figure 1, it appears as though the correlation between EE and CE would be significantly lower if done independently for each concept, and then averaged. My question is: is calculating faithfulness on a per-concept basis is possible with your method?**\\n\\nThis is an interesting observation. When you say \\u201cconcept,\\u201d we\\u2019re assuming you are referring to the concept categories (i.e., behavior, context, and identity) / colors in the plot. Please correct us if that is not the case. It is possible to calculate faithfulness on a per-concept category basis with our method. This can be done by computing the correlation for a subset of the concepts (e.g., just the \\u201cbehavior\\u201d concepts) at a time. We think this would be an interesting analysis to explore in future work.\\n\\n**And a related question, given that pearson correlation only measures to what extent points lie on a line, and not on the line y=x, is it the most appropriate metric for you use case, did you consider others?**\\n\\nThis is a good question. Though Pearson correlation is a commonly used metric, as you point out, there are other options. We chose Pearson correlation because it is simple and easily understood. We discuss the other metrics we considered below:\\n1. *Error metrics, such as mean squared error (MSE) and root mean squared error (RMSE)*. We decided not to use these metrics because the two scores that we compare have different scales: causal concept effects range from 0 to infinity (as they are based on KL divergence), whereas explanation implied effects range from 0 to 1. Consequently, we do not expect these two scores to have the exactly same values, which limits the applicability of these metrics.\\n2. 
*Rank Correlation Metrics (e.g., Spearman\\u2019s rho, Kendall\\u2019s tau)*. These metrics assess the similarity of the orderings of concepts when ranked by the two scores. One limitation is that they penalize all misrankings equally, regardless of the values of misranked items. This means that misranking two concepts with very similar concept effects receives the same penalty as misranking two concepts with a large difference in concept effects. In preliminary experiments using Kendall\\u2019s tau as the alignment metric, we found that this sometimes led to unintuitive results.\"}", "{\"summary\": \"The paper investigates \\u2018unfaithfulness\\u2019 of SOTA LLMs. In this paper, the authors introduce a new metric for measuring the faithfulness of a model, namely, causal concept faithfulness that not only quantifies but also reveals semantic patterns of unfaithfulness. To uncover these patterns, they put this method to the test on two tasks - a social bias task and a medical QA task to demonstrate how decisions made by the models change along with the provided explanations to justify the wrong decisions made by the model.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"I really enjoyed reading the paper! It addresses a highly relevant topic of faithfulness in the current landscape of AI. The writing is engaging and clear.\", \"one_of_its_standout_features_is_its_approach_to_a_critical_issue\": \"it offers a concrete and measurable method for assessing explanation faithfulness in large language models, an area that has been difficult to define in previous research. By introducing the concept of causal concept faithfulness, the authors provide a way to evaluate how \\\"honest\\\" a model's explanations are, while also revealing specific patterns of misleading explanations.\", \"weaknesses\": \"**High level comments**\\n1. 
It remains uncertain whether biases impact all types of reasoning tasks uniformly or if certain domains are more affected than others.\\n2. Moreover, the experiments do not specify how these findings may apply beyond classification tasks to biases that could affect other generative tasks.\\n\\n***Minor comment***\\nLine 321 typo -> should be orange for behavior, red for identity\", \"questions\": \"1. Can a faithful explanation be biased? For example, suppose if both CE and EE are high for the example in Table 1, and if the model would have answered Male: 26% Female: 74%, with Explanation References: Traits/Skills: 15% Age: 0% Gender: 85%. This is a clear case of gender bias, but can we say the explanation is faithful here referring to Definition 2.3?\\n2. How can we understand the observations if the models memorized these benchmark datasets during their training stage? Does this imply that the model\\u2019s reasoning process is so vulnerable to bias that it can even disregard a previously memorized correct answer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4ua4wyAQLm
Local Patterns Generalize Better for Novel Anomalies
[ "Yalong Jiang" ]
Video anomaly detection (VAD) aims to identify novel actions or events which are unseen during training. Existing mainstream VAD techniques typically focus on the global patterns with redundant details and struggle to generalize to unseen samples. In this paper, we propose a framework that identifies the local patterns which generalize to novel samples and models the dynamics of local patterns. The capability of extracting spatial local patterns is achieved through a two-stage process involving image-text alignment and cross-modality attention. Generalizable representations are built by focusing on semantically relevant components which can be recombined to capture the essence of novel anomalies, reducing unnecessary visual data variances. To enhance local patterns with temporal clues, we propose a State Machine Module (SMM) that utilizes earlier high-resolution textual tokens to guide the generation of precise captions for subsequent low-resolution observations. Furthermore, temporal motion estimation complements spatial local patterns to detect anomalies characterized by novel spatial distributions or distinctive dynamics. Extensive experiments on popular benchmark datasets demonstrate the achievement of state-of-the-art performance. Code is available at https://github.com/AllenYLJiang/Local-Patterns-Generalize-Better/.
[ "Global Patterns; Local Patterns; Image-Text Alignment Module; Cross-Modality Attention; Temporal Sentence Generation; State Machine Module" ]
Accept (Poster)
https://openreview.net/pdf?id=4ua4wyAQLm
https://openreview.net/forum?id=4ua4wyAQLm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vI5NFMsoxc", "n6jR9f5CMn", "iD8YpvLFNB", "hGZi80WA3n", "ZsJ0DjI1Ak", "Tz3kyd1boD", "TouMCplV42", "KVE8gj79KH", "I9J4RDQbWJ", "FCEPohnXL5", "DcJryKSuyE", "Bn14tX6mE0", "96vD79fq2w", "8C6L0a4MMZ" ], "note_type": [ "meta_review", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734684531276, 1730253526222, 1737523528241, 1732541327706, 1730645028563, 1732434264952, 1732432205971, 1730292906779, 1733056037395, 1732432956097, 1729004233139, 1732432585293, 1732432817308, 1732647089482 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2742/Area_Chair_hNts" ], [ "ICLR.cc/2025/Conference/Submission2742/Reviewer_qaRU" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Reviewer_Yam7" ], [ "ICLR.cc/2025/Conference/Submission2742/Reviewer_mbj7" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Reviewer_XFDv" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Reviewer_mbj7" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ], [ "ICLR.cc/2025/Conference/Submission2742/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"# Summary and Recommendation for Acceptance\\n\\n---\\n\\n## **Strengths**\\n1. 
**Novel Contributions**:\\n - Proposes a novel framework for video anomaly detection (VAD) focusing on **local patterns** rather than global patterns, which improves generalization to unseen anomalies.\\n - Introduces a **two-stage process**:\\n - **ITAM**: Captures semantically meaningful local components.\\n - **CMAM**: Refines local patterns using cross-modality fusion.\\n - Enhances temporal reasoning with a **State Machine Module (SMM)**, addressing low-resolution scenarios by leveraging inter-frame dependencies.\\n\\n2. **Robust Experimental Validation**:\\n - Achieves state-of-the-art performance on well-established benchmark datasets.\\n - Ablation studies demonstrate the effectiveness of individual components, including ITAM, CMAM, and SMM.\\n\\n3. **Generalization Capabilities**:\\n - Local patterns remain robust across varying visual conditions (e.g., low resolution, occlusion, and viewpoint changes).\\n - The model recombines known components (e.g., arms, legs) to describe unseen actions without explicitly naming them.\\n\\n4. **Efficiency**:\\n - Offers competitive spatio-temporal complexity, achieving an average frame rate of 12 FPS, outperforming baseline methods in both speed and accuracy.\\n\\n5. **Community Contribution**:\\n - Code and detailed manuals provided to ensure reproducibility and ease of adoption.\\n\\n---\\n\\n## **Weaknesses**\\n1. **Complexity**:\\n - The framework includes multiple components (ITAM, CMAM, SMM) that require careful integration, making training and implementation challenging.\\n\\n2. **Heavy Reliance on Pre-trained Models**:\\n - Use of vision-language models (e.g., Qwen-VL-7B) raises concerns about whether performance improvements are mainly due to these external models.\\n\\n3. **Clarity and Presentation**:\\n - Initial version lacked clear motivation and step-by-step explanations for components like ITAM and CMAM.\\n - Fig. 
1 in the initial submission was hand-drawn, reducing its effectiveness in illustrating local and global pattern distinctions.\\n\\n4. **Efficiency Analysis**:\\n - Earlier versions did not provide detailed runtime and efficiency analyses for individual modules.\\n\\n---\\n\\n## **Authors' Mitigation**\\n1. **Clarity and Motivation**:\\n - Revised the manuscript to provide step-by-step explanations for ITAM, CMAM, and SMM.\\n - Updated Fig. 1 with real feature heatmaps to visually distinguish local and global patterns.\\n - Added Algorithm 1 in Section 3.2 to detail the workflow of the two-stage process.\\n\\n2. **Ablation Studies**:\\n - Conducted extensive ablation studies to demonstrate that ITAM and CMAM can be trained with smaller models and perform well without pre-trained models.\\n - Validated the benefits of cross-modality fusion in CMAM and the necessity of SMM for temporal coherence.\\n\\n3. **Efficiency Analysis**:\\n - Added runtime analyses in Appendix H, demonstrating competitive performance with an average frame rate of 12 FPS using YOLO-v7 for object detection.\\n\\n4. **Generalization Improvements**:\\n - Highlighted how local patterns generalize better across domains by capturing semantically meaningful components like body joints.\\n - Addressed low-resolution scenarios by refining captions with SMM, ensuring consistent performance.\\n\\n5. **Framework Simplification**:\\n - Demonstrated that smaller models (e.g., YOLO-v7) achieve similar performance to larger models (e.g., Qwen-VL-7B), reducing dependency on heavy pre-trained models.\\n\\n---\\n\\n## **Remaining Weaknesses**\\n1. **Framework Complexity**:\\n - Despite clarifications, the overall framework remains complex, which may deter practitioners unfamiliar with advanced multimodal techniques.\\n\\n2. 
**Reliance on Vision-Language Models**:\\n - Although the authors addressed concerns through ablation studies, the use of pre-trained models for generating training labels introduces reliance that could be reduced further.\\n\\n3. **Broader Applicability**:\\n - The framework focuses on benchmark datasets but lacks validation in diverse real-world scenarios, such as highly cluttered environments or surveillance with significant occlusions.\\n\\n---\\n\\n## **Justification for Acceptance**\\nThis paper makes a significant contribution to video anomaly detection by introducing a novel approach that prioritizes local patterns, which are more generalizable to novel anomalies. The combination of ITAM, CMAM, and SMM provides a robust framework that effectively integrates spatial and temporal reasoning. The authors' revisions address most reviewer concerns, improving clarity, reducing reliance on pre-trained models, and demonstrating the efficiency of their approach.\\n\\nWhile some complexity remains, the paper's methodological innovation, strong experimental results, and community contributions (e.g., open-source code) outweigh these limitations. This work has the potential to advance the field of anomaly detection and inspire further research. I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to details in the above section\"}", "{\"summary\": \"This paper introduces a novel framework for video anomaly detection (VAD) that prioritizes identifying local patterns over conventional global patterns. The authors contend that local patterns generalize better to novel anomalies that were not encountered during training. Their proposed approach follows a two-stage process involving image-text alignment and cross-modality attention to efficiently capture and model local patterns. 
Additionally, the framework includes a State Machine Module (SMM) to integrate temporal dynamics, enabling enhanced anomaly detection by leveraging both spatial and temporal cues. Experimental results show that this approach achieves state-of-the-art performance on well-established benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed framework is well-structured, with thoughtfully implemented methods.\\n2. Experimental results confirm that the approach achieves state-of-the-art performance on established benchmark datasets for video anomaly detection. \\n3. The method focuses on fine-grained anomaly features and employs text-image alignment to effectively capture local patterns. \\n4. It incorporates Long-range Memory technology, specifically HiPPO, into video anomaly detection.\", \"weaknesses\": \"1. The core of the proposed SMM module comes from works like HiPPO, so it seems that the proposed SMM is directly applying these modules to the VAD task.\\n2. Both the Image-Text Alignment Module and Cross-Modality Attention Module are based on pre-existing techniques, which limits the methodological innovation.\\n3. Is the observed performance improvement attributed to the additional large vision-language models, such as Qwen-VL and BLIP2? The comparison may not be entirely fair. It would be beneficial if the authors could provide evidence or experimental results to clarify whether these powerful external models are the primary contributors to the performance gains.\\n4. The motivation for the work lacks clarity. How do the image-text alignment and cross-modality attention modules achieve \\u201cidentification of local patterns that are consistent across domains and generalize well\\u201d? Additionally, how do they contribute to \\u201cgeneralizing model representations to novel anomalies\\u201d?\\n5. 
Certain claims may require further validation, such as the statement: \\u201cthe complementary relation between visual and textual features remains underexplored.\\u201d\\n6. The paper lacks runtime and efficiency analysis. The code introduction is incomplete, and several experimental details are missing, such as the specific version and scale of Qwen-VL used.\", \"questions\": \"1. The Global and Local Pattern representations in Figure 1 are hand-drawn, which limits their reliability. Are there any real feature visualization images available instead? Using actual visualizations could better illustrate the motivation and effectiveness of the proposed method, particularly in showing whether it yields more distinguishable local patterns. Figure 1 alone does not provide enough information to convey the method\\u2019s motivation and impact.\\n2. Utilizing Qwen for cropping bounding box regions based on prompts could significantly impact efficiency.\\n3. Is the introduction of the Qwen-Chat model the primary source of performance improvement? My concern is that the proposed method incorporates numerous external models, and it remains unclear whether these additions are the main contributors to the observed performance gains.\\n4. Could smaller models be used to replace these large multimodal models? If so, would this result in a significant decrease in performance?\\n5. Could you provide statistical results on runtime and efficiency? Does the proposed method have a significant impact on operational efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Explanations and revisions\", \"comment\": \"Dear Reviewers, thank you for your initial feedback on our submission. We have addressed your comments and provided detailed responses in our rebuttal. 
Please let us know if there are any additional points you\\u2019d like us to clarify before the discussion phase concludes. Your feedback is highly appreciated.\"}", "{\"summary\": \"The paper introduces a novel framework for video anomaly detection, aiming to improve generalization for detecting new, unseen anomalies by focusing on local patterns rather than global event patterns. Traditional video anomaly detection (VAD) methods often struggle with unseen anomalies, as they primarily analyze global patterns. This framework utilizes image-text alignment and cross-modality attention to identify and refine local patterns while enhancing them with temporal information. Core components include the Image-Text Alignment Module (ITAM), Cross-Modality Attention Module (CMAM), and State Machine Module (SMM). The proposed approach demonstrates superior performance on several benchmark datasets, suggesting it can generalize better to novel anomalies.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"By using image-text alignment and cross-modality attention, this method successfully extracts local patterns that remain consistent across varying visual data, enhancing its ability to detect novel anomalies.\", \"The State Machine Module (SMM) and motion estimation integrate temporal clues, effectively strengthening the detection capabilities by including sequential information for more accurate anomaly detection.\", \"*By combining visual and textual features in identifying local patterns, the model benefits from enhanced robustness and accuracy across different visual domains.\"], \"weaknesses\": [\"The method relies on the detection effect of visual-linguistic modeling (VLM), whereas multi-object image processing may ignore contextual information and affect performance. 
The authors need to provide more analysis on the ablation of the foundational models.\", \"The need for multiple layers of modules (e.g., ITAM, CMAM, SMM) to work jointly results in a complex training process that consumes more time and resources. Please provide a comparison of the spatio-temporal complexity analysis with previous methods to demonstrate the practical effectiveness of the method.\", \"In low-resolution scenes, the generated text description loses detail information, which affects the anomaly detection effect.\"], \"questions\": [\"How to further improve the generalization of local patterns without relying on visual-linguistic models?\", \"How does the method ensure adaptability to low-resolution videos in different datasets and real-world application scenarios?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Review of Submission2742 by Reviewer mbj7\", \"comment\": \"Thanks for your reply and hard work, you have solved my doubts, and the paper has become more substantial. Considering the innovation and contribution to the field, I think this paper still cannot be raised to 8 points, and it is worthy of 6 points.\"}", "{\"title\": \"Explanations and Revisions\", \"comment\": \"Thank you for your comments.\\n\\n1. Difference from HiPPO\\n\\nThe purpose of SMM is to address the influences of low-resolution conditions that hinder precise captioning, as is shown by Section 3.4, Fig. 2(b) and Fig. 3 in rebuttal revision. SMM determines whether earlier high-resolution events are represented by later image tokens. We have provided the code for implementing SMM in the anonymous url, as well as hand-on manuals. It can be seen that SMM significantly differs from HiPPO. 
\\nDifferent from HiPPO which tackles single-modality 1-dimensional signals, the proposed SMM models the complex dependencies in high-dimensional multi-modal sequences by stacking up 3 state machines. The advantages of stacking up state machines are shown by ablation studies in Section 4.3 and Table 3. \\nIt can be seen that the SMM with stacked state machines outperforms a single state machine. \\n\\n2. Is the observed performance improvement attributed to the additional large vision-language models\\n\\nWe have enriched Section 4.3 and Appendix E with ablations on the structure of the Image-Text Alignment Module (ITAM) in the rebuttal revision. Specifically, we have made significant changes to the structure and training data of ITAM without much influence on performance. It can be seen from Table 5 that the structure and data variations do not significantly influence performance as long as image-text alignment is conducted. ITAM can be trained using normal data and detect unseen anomalies. As a result, the core contribution is image-text alignment instead of pre-existing models. \\nIn terms of CMAM, the comparison between Settings 3, 4 and 5 in Table 3 illustrates the benefits of both modalities. The texts are generated from image tokens in our proposed cross-modality fusion. \\nAs is addressed in Section 3.1 and Appendix D (Table 4), the YOLO detector contributes to the same accuracy as the Qwen-VL based detector. \\n\\n3. The motivation for the work lacks clarity \\n\\nWe have augmented Fig. 1 and Sections 3.2, 3.3 and 3.4 with clear motivations. Texts describe generic movement attributes (e.g., \\\"A man is walking with swinging arms and moving legs\\\"). When encountering an unseen action, such as running, the model can recombine known components like arms and legs to generate descriptive language that captures the essence of the action without explicitly naming it, as illustrated in Fig. 1 and Fig. 10.\\n\\n4. 
Certain claims may require further validation\\n\\nThe claim has been removed. In Stage 2 of the framework, image tokens and text tokens are combined in CMAM. Table 3 now includes Settings 3, 4 and 5 to highlight the necessity of both modalities. Besides, Fig. 5 shows that combining visual and textual features outperforms using either modality alone.\\n\\n5. Runtime and efficiency analysis\\n\\nWe tried both Qwen-VL-7B and YOLO-v7 as options for object detection. As is shown by Table 4 in Appendix D, both detectors achieve similar accuracy. Appendix H, Table 6 and Table 7 are added to detail the inference times of all components in the framework. With all components considered, the method achieves an average frame rate of 12 FPS. \\n\\n6. The Global and Local Pattern representations in Figure 1 are hand-drawn\\n\\nWe have carefully revised Fig. 1 in the rebuttal revision. Currently, global and local patterns are visualized using real feature heatmaps. As shown in Fig. 1, local patterns capture semantically meaningful components such as body joints. \\n\\n7. Smaller models replace large ones, Qwen has low efficiency \\n\\nWe tried both Qwen-VL-7B and YOLO-v7 as options for object detection. As is shown by Tables 4 and 6 in the Appendix, both detectors achieve similar accuracy. YOLO is faster than Qwen-VL-7B. \\nIn terms of BLIP-2, we have enriched Section 4.3 and added Appendix E with ablations on the structure of ITAM. Table 5 shows that the structure and data variations do not significantly influence performance. ITAM can learn from normal data and detect anomalies.\\n\\n8. Primary source of performance improvement\\n\\nWe have enriched Section 4.3 and added Appendix E for ablation studies. Our first contribution is the two-stage scheme for identifying local patterns. The first stage is image-text alignment, with its motivation clarified in the revised Section 3.2. 
Appendix E shows that the structure and training data of ITAM do not significantly influence model performance. The contribution of the second stage is presented by the ablations on Cross-Modality Attention. \\nThe second contribution is SMM, which combines the image features from the current moment with the text features from the previous moment in augmenting the descriptions about images. The ablation studies on SMM\\u2019s structure are detailed in Section 4.3. \\nThe Qwen model is only leveraged when generating the training labels for SMM; it does not influence inference speed, according to Section 3.4. Setting 7 in Table 5 shows that if SMM is trained using the captioning labels from the dataset and without requiring Qwen-Chat, performance is not influenced.
Given the use of large language models (LLMs) and other complex modules, it is important to address the efficiency of the approach.\\n2) What is the role of the State Machine Module (SMM) in temporal sentence generation? A more detailed explanation of the SMM and its role is needed.\\n3) How does the proposed method handle situations with significant occlusions or viewpoint changes, which are common in real-world surveillance videos?\\n4) The two-stage process for extracting spatial local patterns using image-text alignment and cross-modality attention is not explained in enough detail. The paper lacks a clear, step-by-step explanation of how these complex processes work.\", \"questions\": \"The paper mentions limitations related to the reliance on VLM-based object detectors. How can this limitation be addressed in future work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarification on concerns\", \"comment\": \"Dear Reviewers and Area Chairs,\\n\\nI would like to express my gratitude for your time and feedback. We have carefully addressed the concerns raised by the reviewers in our previous responses. \\n\\nIn terms of novelty, the structure of the proposed SMM significantly differs from HiPPO. We have provided the code in our submission, which clearly demonstrates the differences between our approach and the referenced work. Besides, we have discovered local patterns that can be recombined to capture the essence of novel anomalies; this is implemented by the two-stage process with Image-Text Alignment and Cross-Modality Attention to gradually identify the generalizable local patterns. Ablation studies show that our method also works with smaller models and does not necessarily rely on pre-existing large models. 
\\n\\nIn terms of efficiency concerns, we have provided spatio-temporal complexity analysis and comparisons with previous methods, and we have two choices of object detectors, one of which has a much higher efficiency than the VLM. The efficiency of the approach exceeds that of the baseline LLM-based approach. \\n\\nWe have clarified the motivations and implementations of image-text alignment, cross-modality attention and temporal sentence generation with stacked state machines. \\n\\nIn cases of low-resolution scenes, occlusions or viewpoint changes, our proposed local patterns capture semantically meaningful features such as body joints which are consistent across the variations. The proposed SMM determines whether earlier high-resolution events are represented by later image tokens; it leverages inter-frame dependencies to deal with variations. Regarding the concerns about ignoring contextual information, we have expanded bounding boxes, facilitating the involvement of relative contexts in analyzing subjects and benefiting performance. \\n\\nWe would greatly appreciate any further feedback or clarification requests from the reviewers. If there are any additional questions, we are happy to provide further details.\\n\\nThank you once again for your attention and consideration.\\n\\nBest regards\"}", "{\"title\": \"Explanations and revisions\", \"comment\": \"Thank you for your constructive comments, which have helped us greatly in revising our manuscript.\\n\\n1. Motivation for each part of the method\\n\\nFirstly, we have clarified the purpose of the framework in the refined Fig. 1 in the rebuttal revision. Global patterns retain redundant visual details, whereas local patterns, learned via image-text alignment, capture semantically meaningful components such as body joints.
This alignment with textual descriptions enables local patterns to focus on relevant semantic elements, facilitating the comparison of spatial distributions of local patterns across images for identifying anomalies. \\nSecondly, we have clarified the purpose of the proposed two-stage scheme, including ITAM and CMAM, in Sections 3.2 and 3.3. Image-text alignment in Stage 1 facilitates the identification of local components. When encountering an unseen action, such as running, the model can recombine known components like arms and legs to generate descriptive language that captures the essence of the action without explicitly naming it, as illustrated in Fig. 1. The generalizable components are shared by normal and novel abnormal events. \\nWe have added experiments in \\u201cSection 4.3 Ablation Studies\\u201d and Appendix E with ablations on ITAM\\u2019s structure. As is illustrated by Settings 1, 2, 3 and 4 in Table 5, the structure and training data for ITAM do not significantly influence performance. The core contribution is image-text alignment. We have also enriched the ablations on CMAM in Settings 3, 4 and 5 in Table 3. It can be seen that cross-modality feature fusion outperforms either single modality even though the texts are generated from image tokens. In Setting 2 of Table 5, ITAM is trained using only normal samples and achieves good performance. As a result, image-text alignment is capable of generalizing to novel samples.\\nThirdly, we have clarified the motivation of SMM in TSGM in Section 3.4. In some cases, a person walks from near to far, causing the resolution of the person to gradually decrease and leading to inaccurate captions. Therefore, SMM in TSGM plays a vital role in determining whether earlier events captured in high-resolution moments are still represented by the later image tokens; it captures inter-frame dependencies and refines sentence coherence. Settings 6 and 7 in Table 3 show the benefits of SMM. 
Section 4.3 also ablates on the structure of SMM. \\nThe abstract and introduction are also updated accordingly.\\n\\n2. The efficiency of the model is worth discussing \\n\\nWe have added Appendix H with Table 6 and Table 7 for showing operational efficiency, including the runtime of each module and spatio-temporal complexity. In comparison with baseline LLM-based approaches, the proposed approach shows advantages in both accuracy and speed. \\nWe have uploaded the code for implementing the proposed approach at the URL at the end of the Abstract. Currently, more details as well as hands-on manuals are provided, including the Backbone, ITAM, CMAM, TSGM, SMM and so on.\\n\\n3. Additional information\", \"the_core_contributions_of_this_paper_come_in_the_following_ways\": \"(1)\\tAn approach is proposed to represent novel anomalies using local patterns. Local patterns capture semantically meaningful components such as body joints, are consistent across domains and generalize well, reducing the redundant visual details in global patterns. We have refined Fig. 1 and the first paragraphs in Sections 3.2, 3.3 and 3.4, and added Appendix I to illustrate this motivation. \\n(2)\\tA two-stage process with image-text alignment and cross-modality attention is proposed for identifying the local patterns. The ablation studies on ITAM\\u2019s structure and training data are detailed in Appendix E. ITAM can be trained using normal data and detect unseen anomalies. The primary contributor to generalization is image-text alignment instead of external pre-trained models. Besides, the texts are generated from image tokens in the proposed scheme for cross-modality fusion.\\n(3)\\tThe generation of captions is influenced by visual data variances such as low resolutions, as shown in Fig. 2(b) and Section 3.4. Therefore, SMM determines whether earlier high-resolution events are represented by later image tokens. It captures inter-frame dependencies and refines sentence coherence. 
SMM uses earlier text tokens to generate precise captions for low-resolution observations. Ablation studies in Section 4.3 show that the proposed SMM achieves this goal by stacking state machines, outperforming single state machines.\"}", "{\"summary\": \"This article is about video anomaly detection (VAD). This paper proposes a framework for recognizing local patterns, which can be generalized to new samples and dynamic modeling of local patterns. This paper proposes image-text alignment and cross-modal attention. Generalizable representations are built by focusing on textual information features that filter out unnecessary differences in visual data. In addition, time motion estimation complements spatial local models to detect anomalies characterized by new spatial distributions or unique dynamics. A large number of experiments have verified the effectiveness.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The two-stage training method of this paper is reasonable. And allow for more fine-grained local features.\\n2. This paper gives a lot of visualizations to make it easier to understand the specific content.\\n3. The paper has achieved good performance, and the ablation experiment is given.\", \"weaknesses\": \"1. Not giving motivation for each part of the method. In my opinion, a good paper should give a specific reason and then introduce the method.\\n2. The efficiency of the model is worth discussing. You have proposed a lot of model modules. How much more reasoning time will they add to the network?\", \"questions\": \"My understanding of this field is unprofessional. 
So I will further follow the opinions of other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Explanations and revisions\", \"comment\": \"Thank you for your constructive comments which have helped us greatly in revising our manuscript.\\n\\n1. Clear discussion of the computational complexity \\n\\nAppendix H, Table 6 and Table 7 are added in rebuttal revision to detail the inference times of all components in the proposed framework. With all components considered, the proposed method achieves an average frame rate of 12 FPS, achieving advantages in both accuracy and speed.\\nWe have uploaded the code for implementing the proposed approach in the url at the end of Abstract. Currently, more details as well as hand-on manuals are provided, including the Backbone, ITAM, CMAM, TSGM, SMM and so on. \\nWe tried both Qwen-VL-7B and YOLO-v7 as options for object detection. As is shown by Table 4 in Appendix D, both detectors achieve similar accuracy. In terms of inference speed, the YOLO detector processes each frame in 1.5 milliseconds, whereas Qwen-VL-7B requires 5.2 seconds per frame. Consequently, we evaluate the operational efficiency of the proposed framework using the YOLO detector. \\n\\n2. Role of State Machine Module (SMM) in temporal sentence generation\\n\\nWe have carefully revised Fig. 2(b), Fig. 3 and Section 3.4 to detail SMM and its role. Stage 2 of the framework addresses the visual data variances, such as low resolutions, that influence the generation of captions. As is shown in Fig. 2(b), the module for image-grounded text generation only provides a coarse caption \\\"A man is walking\\\" on later observations because of low-resolutions, it is not as precise as earlier captions \\\"A man is pushing a stroller on the street\\\" even if they actually describe the same event. 
\\nTherefore, SMM in TSGM plays a vital role in determining whether earlier events are still represented by the later image tokens; it captures inter-frame dependencies and refines sentence coherence. \\nSpecifically, SMM takes in the concatenation of the image tokens at t with the text tokens of the sentence generated at t-1; it determines whether the event described by the sentence still resides in the image tokens, and returns \\u201cyes\\u201d or \\u201cno\\u201d.\\nIn terms of SMM\\u2019s structure, we have added ablation studies on SMM\\u2019s structure in Section 4.3 of the rebuttal revision. We have uploaded the code for training and inference with SMM at the URL at the end of the Abstract. \\n\\n3. Handle situations with significant occlusions or viewpoint changes\\n\\nFirstly, Fig. 10 is added to Appendix I of the rebuttal revision; the figure shows some examples of the local patterns identified by image-text alignment and cross-modality attention. The local patterns capture semantically meaningful features such as body joints which are consistent across the variations. The compact representations ignore redundant details and contribute to generalizable embeddings. \\n\\n4. Explanations about the two-stage process for extracting spatial local patterns \\n\\nFirstly, we have enriched Section 3.2 in the rebuttal revision for better illustration. Algorithm 1 is added in Section 3.2 to detail the workflow. Stage 1 identifies the image tokens using image-text alignment; Stage 2 further refines and captures local patterns using cross-modality attention. The processes identify the semantically meaningful components in each image region that align with texts. Specifically, the captions used to train the model describe generic movement attributes (e.g., \\\"A man is walking with swinging arms and moving legs\\\"). 
When encountering an unseen action, such as running, the model can recombine known components like arms and legs to generate descriptive language that captures the essence of the action without explicitly naming it, as illustrated in Fig. 1.\\n\\n5. Limitations related to the reliance on VLM-based object detectors \\n\\nWe tried both Qwen-VL-7B and YOLO-v7 as options for object detection. As is shown by Table 4 in Appendix D, both detectors achieve similar accuracy. In terms of inference speed, the YOLO detector processes each frame in 1.5 milliseconds, whereas Qwen-VL-7B requires 5.2 seconds per frame. Consequently, we use the YOLO detector in the framework. \\nIn the future, we will explore the integration of object detectors in an end-to-end large model. In simpler scenes with few objects, the whole input image is embedded with fewer vision tokens. As the scenes become more complex, more objects are involved, and the input image is encoded with an increased number of vision tokens, each of which describes one or more objects. \\nThis has been added to Appendix J.
In comparison with the baseline LLM-based approach, the proposed approach shows advantages in both accuracy and speed. \\nWe have uploaded the code for implementing the proposed approach at the URL at the end of the Abstract. Currently, more details as well as hands-on manuals are provided, including the Backbone, ITAM, CMAM, TSGM, SMM and so on. \\n\\n3. Adaptability to low-resolution videos \\n\\nFirstly, we have added Fig. 10 to Appendix I of the rebuttal revision, visualizing the local patterns and generated captions under low resolutions, occlusions and viewpoint changes. It can be seen that TSGM generates accurate captions and captures semantically meaningful features such as body joints, in a similar fashion to high-resolution images. \\nSecondly, we have clarified the purpose of SMM in Section 3.4. In some cases, a person walks from near to far, causing the resolution of the person to gradually decrease, leading to inaccurate captions. Therefore, SMM in TSGM plays a vital role in determining whether earlier events captured in high-resolution moments are still represented by the later image tokens; it captures inter-frame dependencies and refines sentence coherence. \\n\\n4. Further improve the generalization of local patterns without relying on visual-linguistic models\\n\\nFirstly, we have clarified the purpose in Fig. 1. As shown in Fig. 1, global patterns retain redundant visual details, whereas local patterns, learned via image-text alignment, capture semantically meaningful components such as body joints. This alignment with textual descriptions enables visual local patterns to focus on relevant semantic elements, facilitating the comparison of spatial distributions of local patterns across images for identifying anomalies. \\nSecondly, we have clarified the purpose of the proposed two-stage scheme in Section 3.2. The model learns to decompose global patterns into semantically meaningful local patterns. 
When encountering an unseen action, such as running, the model can recombine known components like arms and legs to generate descriptions that capture the essence of the action without explicitly naming it. \\nAs a result, our motivation is to replace global patterns, which include redundant details, with semantic local patterns, which correspond to generalizable components. Normal and novel abnormal events share these components. Without visual-linguistic models, we could try another way to identify the components as local patterns, such as using graph representations.
4tiTQ33sDH
Unlocking the Power of GANs in Non-Autoregressive Text Generation
[ "Da Ren", "Yi Cai", "Qing Li" ]
Generative Adversarial Networks (GANs) have been studied in text generation to tackle the exposure bias problem. Despite their remarkable development, they adopt autoregressive structures and thus suffer from high latency in both training and inference stages. Although GANs have the potential to support efficient generation by adopting non-autoregressive (NAR) structures, their explorations in NAR models are extremely limited. In this work, we conduct a pioneering study of building language GANs based on NAR structures. We identify two issues that constrain the performance of GAN-based NAR models. Firstly, existing methods of incorporating latent variables provide highly similar representations which cannot describe the diversity of different words in sentences. We tackle this problem by proposing Position-Aware Self-Modulation, providing more diverse and effective representations. Secondly, the attention mechanism in Transformer cannot accurately build word dependencies in the unstable training of GANs, and we adopt a Dependency Feed Forward Network to enhance the model capacity in dependency modeling. Armed with these two facilities, we propose a GAN-based NAR model, the Adversarial Non-autoregressive Transformer (ANT). The experimental results demonstrate that ANT can achieve comparable performance with mainstream models in a single forward pass and has great potential in various applications like latent interpolation and semi-supervised learning.
[ "Language GANs", "Non-Autoregressive Model", "Text Generation" ]
https://openreview.net/pdf?id=4tiTQ33sDH
https://openreview.net/forum?id=4tiTQ33sDH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qJ1gswmTm5", "lUvjdoCpcZ", "fQ4EjrfEAp", "aPAt9Jr5Ie", "3p6UkwAiiH" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730797547057, 1730707099693, 1730603760516, 1731867107487, 1730258105691 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7643/Reviewer_c2z6" ], [ "ICLR.cc/2025/Conference/Submission7643/Reviewer_Lecz" ], [ "ICLR.cc/2025/Conference/Submission7643/Reviewer_1HgL" ], [ "ICLR.cc/2025/Conference/Submission7643/Authors" ], [ "ICLR.cc/2025/Conference/Submission7643/Reviewer_rhky" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a novel model called Adversarial Non-autoregressive Transformer (ANT) aimed at enhancing the efficiency and performance of Generative Adversarial Networks (GANs) in text generation. Unlike conventional GANs that rely on autoregressive (AR) structures, ANT leverages a non-autoregressive (NAR) framework, allowing for parallel computation and significantly reducing latency in both training and inference. Key contributions include the introduction of Position-Aware Self-Modulation to enhance representation diversity and Dependency Feed Forward Network (Dependency FFN) to improve dependency modeling. Experimental results show ANT's competitive performance with AR models in terms of quality while achieving lower latency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work is pioneering in applying GANs within a non-autoregressive structure for text generation, presenting novel solutions like Position-Aware Self-Modulation and Dependency FFN to tackle inherent limitations in GAN-based text generation.\", \"weaknesses\": \"1. The datasets chosen for experimental validation are relatively simple, lacking common tasks like translation and summarization, which weakens the persuasiveness of the results.\\n2. 
The issue described in line 57, \\\"the dynamic weight assignment process becomes unstable during the fragile training of GANs,\\\" lacks references, in-depth analysis, or detailed description of the phenomenon, making it difficult to thoroughly understand this problem.\", \"questions\": \"Please check the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the application of Generative Adversarial Networks (GANs) in non-autoregressive (NAR) text generation, addressing the limitations of existing GAN-based text generation models that typically rely on autoregressive structures. The authors identify two main issues with current NAR models: the lack of diversity in latent variable representations and the instability of attention mechanisms during GAN training. To tackle these problems, they introduce two useful techniques: Position-Aware Self-Modulation (PASM) and Dependency Feed Forward Network (DFFN). 
The experimental results demonstrate that ANT achieves comparable performance to the baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors propose Position-Aware Self-Modulation (PASM), which provides more diverse and effective latent variable representations, enhancing the model's ability to capture the diversity of different words in sentences.\", \"To improve the dependency among the decoding procedure, the authors propose Dependency Feed Forward Network (DFFN), which can lead to better performance.\", \"The authors conduct extensive experiments to validate the effectiveness of their proposed model, comparing it against mainstream models and demonstrating its competitive performance.\"], \"weaknesses\": [\"The title of the paper is \\\"Unlocking the Power of GANs xxx.\\\" Generally, the strength of GANs lies in the training of the generator and discriminator through a game-theoretic mechanism. However, the main focus of this paper is not on GANs but rather on non-autoregressive text generation. I do not believe the power of GANs lies in non-autoregressive modeling.\", \"While the authors compare their model, ANT, to several state-of-the-art models, it appears they have selectively chosen only strong non-autoregressive baselines that utilize GANs. Other baseline methods, such as Huang et al. (ICML 2022), should also be discussed to strengthen the claims regarding the model's superiority.\", \"Additionally, the experiments are conducted primarily on specific tasks and datasets, which are somewhat outdated. 
It would be valuable to assess how well the model generalizes to other text generation tasks, such as summarization, dialogue generation, and long-form text generation.\"], \"questions\": \"refer to the comments\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the Adversarial Non-autoregressive Transformer (ANT), a GAN-based model for efficient text generation. It proposes two main contributions: Position-Aware Self-Modulation and Dependency Feed Forward Network (Dependency FFN). The study claims that ANT achieves comparable performance to mainstream models with significantly lower latency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured and provides clear explanations of the proposed methods and their implications, making it easy to follow the authors' reasoning.\"], \"weaknesses\": \"1. The paper may lack a comprehensive comparison with state-of-the-art non-autoregressive models, which is crucial for establishing the significance of the proposed ANT model: Glancing Transformer (Qian et al, 2021), Fully-NAT (Gu et al. 2021), DA-Transformer (Huang et al. 2022), SUNDAE (Nikolay Savinov et al. 2022), etc.\\n2. While the paper claims that Position-Aware Self-Modulation enhances generation diversity, it appears to be a common practice to input identical [MASK] tokens plus positional embeddings in NAR models, which also achieve strong performance during decoding. The paper does not provide direct evidence to show that the similar representation approach hinders the model's generation ability or that Position-Aware Self-Modulation offers a significant improvement over this standard practice. This lack of evidence makes it difficult to assess the true impact of this contribution.\\n3. 
The Dependency Feed Forward Network is presented as a solution to the instability of word dependencies during GAN training. However, the provided evidence in Figure 5 shows only a marginal improvement, with the gap in FED not exceeding 0.001. Such a small difference raises questions about the practical significance of this improvement, especially considering the computational overhead it might introduce.\", \"questions\": \"1. Could the authors provide empirical evidence or further analysis to support the claim that Position-Aware Self-Modulation significantly improves generation diversity compared to standard practices in NAR models?\\n2. How does the Dependency Feed Forward Network provide a clear advantage over traditional FFNs in the context of GAN training, and what experimental results demonstrate this, beyond the marginal improvement shown in Figure 5?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work introduced the Adversarial Non-autoregressive Transformer (ANT), a pioneering study of building language GANs based on non-autoregressive (NAR) structures to address the exposure bias problem and reduce latency in training and inference. ANT tackles two key issues: the lack of diversity in latent variable representations by proposing Position-Aware Self-Modulation, and the inaccurate word dependency modeling in Transformers by adopting a Dependency Feed Forward Network. 
Experimental results show that ANT achieves performance comparable to mainstream models in a single forward pass, with promising applications in latent interpolation and semi-supervised learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work pioneers the development of language GANs based on non-autoregressive (NAR) structures, addressing the high latency issues inherent in autoregressive (AR) models. By generating all words in parallel, the Adversarial Non-autoregressive Transformer (ANT) achieves high-efficiency generation, significantly reducing both training and inference times.\\n\\nThe introduction of Position-Aware Self-Modulation and Dependency Feed-forward Network (Dependency FFN) addresses critical challenges in GAN-based NAR models. Position-Aware Self-Modulation enhances the diversity of hidden representations, leading to the generation of more varied and high-quality words in sentences. Dependency FFN improves the stability and accuracy of dependency modeling, resulting in more grammatically coherent outputs compared to traditional attention mechanisms.\", \"weaknesses\": \"The baselines selected for comparison in the paper are quite outdated, with the chosen non-autoregressive (NAR) models being from 2018, 2019, and 2021. Given the rapid advancements in the field of natural language processing (NLP), it is crucial to compare the proposed model against the most recent and state-of-the-art NAR models to provide a more accurate assessment of its performance. The absence of comparisons with the latest models raises concerns about the relative effectiveness and competitiveness of the proposed approach.\\n\\nThe experiments conducted in the paper are limited to the COCO and EMNLP datasets, which do not provide a comprehensive evaluation of the model's capabilities. 
To thoroughly assess the performance and robustness of the proposed NAR model, it is essential to test it on a wider range of datasets, including those for machine translation (e.g., WMT), natural language inference (e.g., SNLI), and text summarization. Evaluating the model on these additional datasets would offer valuable insights into its effectiveness across different NLP tasks, particularly in handling longer texts, which is a critical aspect of many real-world applications. The current dataset selection limits the generalizability and applicability of the findings.\\n\\nIf these aspects were addressed with more comprehensive experiments, it would significantly improve the evaluation and increase the overall score of the paper.\", \"questions\": \"See weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
4sJJixGIZX
Online Continual Graph Learning
[ "Giovanni Donghi", "Luca Pasa", "Daniele Zambon", "Cesare Alippi", "Nicolò Navarin" ]
The aim of Continual Learning (CL) is to learn new tasks incrementally while avoiding catastrophic forgetting. Online Continual Learning (OCL) specifically focuses on learning efficiently from a continuous stream of data with shifting distribution. While recent studies explore Continual Learning on graphs exploiting Graph Neural Networks (GNNs), only a few of them focus on a streaming setting. Many real-world graphs evolve over time and timely (online) predictions could be required. However, current approaches are not well aligned with the standard OCL literature, partly due to the lack of a clear definition of online continual learning on graphs. In this work, we propose a general formulation for online continual learning on graphs, emphasizing the efficiency of batch processing while accounting for graph topology, providing a grounded setting to analyze different methods. We present a set of benchmark datasets for online continual graph learning, together with the results of several methods in CL literature, adapted to our setting. Additionally, we address the challenge of GNN memory usage, as considering multiple hops of neighborhood aggregation can require access to the entire growing graph, resulting in prohibitive costs for the setting. We thus propose solutions to maintain bounded complexity for efficient online learning.
[ "continual learning", "online learning", "graph neural network" ]
Reject
https://openreview.net/pdf?id=4sJJixGIZX
https://openreview.net/forum?id=4sJJixGIZX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zXP30ksqeO", "z9clclInyy", "yGS9zwHE2Z", "mHQGDwTltF", "lypjaPX9l0", "kkWAVpQx00", "cc6gi6XpzU", "SinDPZjc3k", "ROlCHddB4E", "R9hlrVGnKF", "OdSSqsckHM", "NslOJZ5gvm", "Mte5ueaNNJ", "JgCcBm65zs", "HxkFzLh2jT", "A29EVi5QRa", "6lBLMl9IOI" ], "note_type": [ "official_review", "official_review", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730557958274, 1730306589443, 1729950001023, 1737524117998, 1732548143034, 1730825046465, 1732388772486, 1732513994292, 1732389254072, 1732448866732, 1732388593844, 1732388311327, 1732494651041, 1732845389156, 1732518146322, 1732548467796, 1734713154461 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_zdzj" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_gFtT" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_n8Q1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_RugN" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_gFtT" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_zdzj" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_n8Q1" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_n8Q1" ], [ "ICLR.cc/2025/Conference/Submission11334/Reviewer_RugN" ], [ "ICLR.cc/2025/Conference/Submission11334/Authors" ], [ "ICLR.cc/2025/Conference/Submission11334/Area_Chair_CD5L" ] ], "structured_content_str": [ "{\"summary\": \"This paper aims to formulate the setting of online continual graph 
learning, considering the efficiency of batch processing and graph topology, and proposes a set of benchmark datasets for online continual graph learning. Additionally, from the technical perspective, the authors address the challenge of GNN memory usage.\n\nWithin the context of online continual graph learning, the graphs are defined as a time series, in which the graph snapshot at each time stamp t contains the nodes and edges collected from the starting time till t. Each new snapshot is created when a new node is added. The newly attached information includes the new node, its neighbors, and the node features.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Online continual graph learning has not been fully explored, and this work makes some contribution in this direction.\n\n2. Compared to existing continual graph learning works, this work adopts a more practical hyperparameter selection strategy that only uses a few tasks.\", \"weaknesses\": \"1. The main weakness is the inconsistency between the proposed setting and the actual experiments. Although the paper describes an online learning setting, the task construction in the experiments is still the same as in the continual graph learning setting with task boundaries. As mentioned in the paper, 'the graph grow with nodes from two new classes at a time', so the incremental manner is the same as normal class-incremental learning instead of an online learning setting. I would recommend that the experiments be consistent with the proposed setting, in which each new snapshot could contain one node or a mini-batch of new nodes, but not necessarily a new task containing new classes.\n\n2. The adopted baselines are a little bit out of date. Besides, only TWP is specially designed for graph data, while the others don't consider the graph structures. 
Admittedly, the authors have discussed why some baselines are not adopted, but the mentioned ones were all proposed no later than 2021. Therefore, it is not convincing enough that the adopted methods represent state-of-the-art performance. I would recommend that recent continual graph learning works proposed from 2022 to 2024 be thoroughly investigated, discussed, and compared whenever appropriate.\", \"questions\": \"1. In the Mini-batching part of Section 3.1, what does it mean by 'L>1 is not in contrast with the growing mechanism of the graph'? I can guess that what the authors want to express is that the entire graph may be required for aggregating multi-hop information, but the writing here seems confusing.\n\n2. It is a little bit confusing whether the proposed strategy allows the model to access the entire graph that contains previous nodes. It is stated that the up-to-date graph Gt is stored in a Past Information Store (PIS) system, but only limited use of information from the PIS is allowed. It is unclear what kind of usage is deemed 'limited'. Additionally, it is also stated that the PIS is different from an 'eventual memory buffer'. This is also confusing. If the PIS contains the complete graph with all previous information, then what is the role of the 'eventual memory buffer', and why do we still need such a buffer?\n\n3. In the 'training details' part of the experiment section, when talking about the batch size, how is each batch used? Given a new task with N data, will the model be trained on the N data for several epochs, with the batches fed into the model sequentially in each epoch?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces the Online Continual Graph Learning (OCGL) framework to handle non-stationary streaming data in graph structures. 
It benchmarks several continual learning methods on four datasets, adapting them for the online graph learning scenario. The authors propose a neighborhood sampling strategy to address the issue of neighborhood expansion in Graph Neural Networks (GNNs) and reduce computational complexity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of the Online Continual Graph Learning (OCGL) framework extends continual learning to dynamic, graph-structured data.\\n\\n2. The paper provides a thorough evaluation of multiple continual learning methods, adapting them for online graph learning.\\n\\n3. The proposed neighborhood sampling strategy effectively addresses the computational and memory challenges of multi-hop neighborhood aggregation in GNNs.\", \"weaknesses\": \"1. The benchmarks focus mainly on node classification tasks, and extending the framework to more diverse graph-based applications (e.g., edge prediction, link prediction) could strengthen the paper's contributions.\\n\\n2. The paper primarily compares traditional continual learning methods adapted for the Online Continual Graph Learning (OCGL) framework. It does not include comparisons with more recent state-of-the-art continual graph learning methods proposed in the recent three years, such as MSCGL[1] and UGCL[2].\\n\\n [1] J. Cai, X. Wang, C. Guan, Y. Tang, J. Xu, B. Zhong, and W. Zhu, ''Multimodal continual graph learning with neural architecture search,'' in Proceedings of the ACM Web Conference, 2022, pp.1292\\u20131300.\\n\\n [2] T. D. Hoang, D. V. Tung, D.-H. Nguyen, B.-S. Nguyen, H. H.Nguyen, and H. Le, ''Universal graph continual learning,'' Transactions on Machine Learning Research, 2023.\\n\\n3. While the sampling strategy improves computational efficiency, it can negatively impact model accuracy. \\n\\n4. 
The paper predominantly concentrates on experimental evaluation and lacks an in-depth theoretical analysis of the proposed method's properties, such as convergence, computational complexity, and theoretical bounds on forgetting.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an Online Continual Graph Learning (OCGL) framework designed for learning in dynamic graph environments where data arrives in a streaming fashion. The authors address challenges of catastrophic forgetting in graph-based continual learning, particularly when working with graph neural networks (GNNs) that rely on neighborhood information, which can lead to high memory and computational costs. The proposed OCGL framework is evaluated on four node classification datasets, using modified continual learning methods suited for online learning. They propose neighborhood sampling as a strategy to address neighborhood expansion challenges, which could otherwise lead to prohibitive costs in dynamic graph settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"the studied problem is an important research question\", \"I like that the authors tried to take a more systematic approach toward the problem\"], \"weaknesses\": \"1. the technical content of the paper does not address the research question proposed by the authors. For example\n a. one of the claimed contributions is the formulation of the so-called online GCL. However, I do not see any formal formulation of the problem. Only a generic description is provided. For example, to fulfil the claim, one would naturally expect to see the data model, the definition and format of the learner, and the properties and requirements for an effective learner in this scenario. None of this information is provided.\n b. 
another claim is that online GCL is a new learning paradigm and different from GCL. One would expect to see a detailed comparison between these two. How are they different exactly? How large is the difference?\n\n2. there are some statements that are not factually correct. For example, continual learning is inherently an online setting. An ideal continual learning algorithm should adaptively learn from the new data without the need to access previous data. However, this can be proved theoretically impossible. Therefore, many continual learning algorithms compromise by allowing partial access to historical data. Even for online systems, storing historical data is also allowed. Regarding the task boundary, there have been many studies that looked at the continual learning setting without a clear task boundary. These studies have appeared under terms such as \\\"task-free continual learning\\\" and \\\"domain-free continual learning\\\" [1]. Furthermore, it is not clear what exactly 'task boundary' means in the paper.\n\n3. the proposed technique is standard and the paper has no conceivable novelty or contribution. The neighbourhood explosion problem is a standard issue in GNN training even for the case of training GNNs on static graphs, and neighborhood sampling has been the de facto approach in training GNNs. The issue of changing graph structure in GCL has been documented and studied in [2].\n\n[1] \\\"Cglb: Benchmark tasks for continual graph learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 13006-13021.\n\n[2] \\\"Towards robust graph incremental learning on evolving graphs.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We greatly appreciate the reviewer\\u2019s follow-up. 
While it is true that the memory buffer is a subset of the information contained in the PIS, their difference is computational. While the PIS is allowed to grow to accommodate the graph stream, the memory buffer has to maintain a constant size to be used by continual learning strategies to prevent forgetting. In practice, in some cases the memory buffer can be implemented by simple indexing and retrieval from the PIS, especially if the graph is small enough to fit in memory. Instead, if the graph becomes huge, which is a case we want to be prepared for in the OCGL setting, this becomes impossible, since retrieval from the PIS becomes less efficient when it is stored, for example, in a distributed database. Finally, there may be cases in which we only have a \"virtual\" PIS, where we do not store the graph itself but may explore it when obtaining mini-batches (or if new nodes come with neighborhood information). Thus we believe it is best to keep the PIS and the memory buffer conceptually separate in our framework.\"}
One of the paper's main strengths is the formal introduction of the Online Continual Graph Learning (OCGL) framework.\n2. The authors develop a benchmarking environment specifically for OCGL, including multiple datasets and evaluations of various continual learning methods. \n3. The experimental setup and the detailed analysis provided in the paper are thorough and well-constructed.\", \"weaknesses\": \"1. The proposed problem is novel; however, the detailed applicable scenarios for such an OCGL framework should be further explained, especially on graph data.\n2. The baselines chosen in this paper are all Continual learning methods. More methods for the online learning setting should be included.\n3. Also, as a benchmark paper, it would be beneficial to introduce more new datasets.\n4. The contribution of this paper seems limited to me. The authors introduced a new problem setting, OCGL, for graph learning and presented a benchmarking environment for OCGL, but did not propose a novel method to solve this problem. Although I understand benchmarking papers are also important to the research community, I believe that the contribution in this case may not be sufficient for inclusion in this conference.\n5. The third contribution, using random sampling to address the complexity of multi-hop aggregation, is a very straightforward idea, and seems trivial to me.\", \"questions\": \"see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
While our primary focus was on node classification due to its prominence in the graph learning literature, we agree that extending the framework to tasks like edge prediction would significantly broaden its applicability. We will revise the conclusion to explicitly outline these extensions as important steps for future research.\n2. Thank you for suggesting the inclusion of more recent methods for continual graph learning. While we initially focused on widely recognized baselines to ensure a robust comparison within the established continual learning literature, we agree that evaluating newer methods is essential to accurately reflect state-of-the-art performance. We thank you for the pointers to recent literature. However, most recent methods in the CGL literature violate the constraints of the OCGL setting either due to expensive update steps between tasks (such as a costly neural architecture search) or due to inefficient use of past data. As an example, the referenced \"Universal graph continual learning\" proposed a global structure distillation, which requires computing node embeddings for the entire graph, and a local structure distillation, which still requires computing the full embedding of buffer nodes and all their neighbors. Most of these methods are therefore not applicable in our setting. We have nonetheless identified in \"Sparsified Subgraph Memory\" [1] a suitable additional baseline, for which we have already launched experiments, whose results we will include in the final version of the paper.\n3. The proposed simple solution of sampling to address the issue of neighborhood expansion is only a first step in tackling the issue. As we observe at the end of section 7, the (expected) negative impact on model accuracy indicates that further research is required to fully address the issue. 
Rather than a contribution by itself, the proposal of the sampling method serves to bring attention to the specific issue of neighborhood expansion, which requires more consideration in this setting.\n4. We appreciate this observation and agree that a sound theoretical analysis of convergence and bounds on forgetting adds value to continual learning papers. Unfortunately, such an analysis is not straightforward for most methods, and is not provided even in the papers that introduce them. Such a detailed model analysis is therefore outside the scope of our work, which introduces a new, challenging setting for continual learning on graphs, rather than proposing a new method or proving theoretical properties of existing ones.\n\n[1] Xikun Zhang, Dongjin Song, and Dacheng Tao, \"Sparsified Subgraph Memory for Continual Graph Representation Learning\", in 2022 IEEE International Conference on Data Mining (ICDM), 2022
The formulation of the OCGL framework is provided in section 3 of the paper. While we acknowledge that the formalism is limited, in section 3.1 the growing network is formally defined. Specific definitions of the learner are not provided, so as to accommodate multiple solutions, and the requirements for efficiency are explored in the considerations on mini-batching and neighborhood expansion. The objective we wanted to achieve with our framework was to maintain a general enough setting that can be applied in multiple scenarios; therefore, a stringent formalism would have harmed the scope of the OCGL framework.\n - b. We acknowledge that a detailed comparison between CGL and OCGL is not provided, yet they are compared at the end of section 2 and in section 3.1: the key difference between the two is that in CGL the streams consist of graph snapshots (generally subgraphs induced by the current task) on which models are trained offline with multiple passes (possibly also with batching), while in OCGL the stream consists of small mini-batches of nodes, which are not used for training after a new batch arrives. \n2. We respectfully disagree with the reviewer's statement that continual learning is an inherently online setting: while we agree with the points on continual learning, and that storing historical data is also allowed even for online systems, there is a fundamental difference between standard continual learning and online continual learning. While in standard continual learning, training on each task is performed offline, with multiple passes over the data until convergence, the requirements of online learning involve a single pass over the data. Such a distinction is clearly identified in the literature [1,2]. 
We would thus kindly invite the reviewer to point out the specific statements in our manuscript which are deemed factually incorrect, so that we may correct them.\\n Regarding the task boundary, we agree that there have been studies that looked at online continual learning without them, yet this is missing in the CGL literature, as even the referenced paper only proposes it as future work, and the same authors identify it as an open research direction in their recent survey [3]. This further highlights the contribution of our work. As to the definition of task boundary, a clarifying sentence has been introduced in the manuscript.\\n3. We wish to clarify that the objective of our paper is not to propose novel techniques for continual learning on graphs, but rather our primary contribution is the introduction of the Online Continual Graph Learning setting, which has distinct peculiarities compared to standard CGL.\\n This contribution holds particular significance because, although certain studies in the CGL literature describe their frameworks as streaming, they often lack direct comparability with the OCL domain. Moreover, a clear and consistent framework like the one we introduce is absent. By establishing a well-defined problem space, identifying potential challenges, and providing a comprehensive benchmark, our work serves as a foundation for creating and evaluating new methods for efficient online continual graph learning.\\n While certainly the issue of changing graph structure has been documented in multiple surveys, we believe that it poses specific challenges associated with the online setting, in particular regarding mini-batching and neighborhood expansion. Similarly for neighborhood expansion and sampling, while this issue is not unique to our setting, here sampling is not simply a matter of efficiency, but a requirement of the online setting. 
We want to stress that our contribution does not lie in the introduction of new techniques (as we point out in section 3.2, sampling is a simple solution to the neighborhood expansion problem for scaling GNNs to large graphs), but in the introduction of a previously ignored setting, alongside a consideration of the issues it presents that need to be further addressed in future studies.\\n\\n[1] Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner, \\\"Online continual learning in image classification: An empirical survey\\\", Neurocomputing, 2022\\n\\n[2] Albin Soutif\\u2013Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado, Vincenzo Lomonaco, Joost Van De Weijer, and Hamed Hemati, \\\"A Comprehensive Empirical Evaluation on Online Continual Learning\\\", in 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023\\n\\n[3] Xikun Zhang, Dongjin Song, and Dacheng Tao, \\\"Continual Learning on Graphs: Challenges, Solutions, and Opportunities\\\", arXiv:2402.11565, 2024\"}", "{\"comment\": \"Thanks for the detailed response from the authors.\\n\\nPreviously, my major concern was about the online learning setting. After reading the explanation from the authors, I understand that it is indeed the online setting proposed at the beginning of the paper. Therefore I would like to increase my rating to 6.\\n\\nAdditionally, I would also encourage the authors to clarify the relationship between PIS and memory buffer. Although PIS is used to access the neighborhood while the memory buffer is used to prevent forgetting, the information retained by these two mechanisms is the same, i.e., PIS already contains all the information of the memory buffer, and it seems that the memory buffer does not store additional information but could index and retrieve from the PIS.\"}", "{\"comment\": \"We thank you for the review. 
Below, we respond to the identified weaknesses and questions while addressing the concerns that were raised.\", \"weaknesses\": \"1. We believe that the first and main weakness identified by the reviewer stems from a misunderstanding of our experimental setting. We rewrote a sentence to make it clearer. Specifically, with \\\"the graph will gradually grow with nodes from two new classes at a time\\\" we mean that we stream mini-batches of nodes from two classes at a time. In the conventional incremental setting, the data (nodes) of a single task (nodes of two classes) would be available all at once, allowing for multiple passes until convergence. On the contrary, in our experiments, for each pair of classes multiple small mini-batches are processed one by one in an online fashion (without revisiting them later), before passing to the next pair of classes. Therefore our experiments follow exactly the proposed setting, with a class-incremental node stream definition.\\n2. We appreciate the suggestion to incorporate more recent methods for continual graph learning. However, please note that the most recent methods in the CGL literature violate the constraints of the OCGL setting either due to expensive update steps between tasks or due to inefficient use of past data. Most of these methods therefore are not applicable in our setting. While we selected widely recognized baselines to provide a robust comparison within the continual learning literature, we agree that newer methods should be evaluated to reflect state-of-the-art performance. For this purpose, in the final version of the paper we will include results for \\\"Sparsified Subgraph Memory\\\" [1], for which we have already launched the experiments.\", \"questions\": \"1. We thank you for bringing up this point of confusion, which certainly requires a rewording. We intended to emphasize that aggregating multi-hop information does not fundamentally conflict with the evolving nature of the graph. 
Yet, this necessitates storage of past information (see next point) and raises the problem of neighborhood expansion. \\n2. The purpose of a Past Information Storage is to have access to multiple hops of neighboring nodes for message passing when new ones arrive. In this context, the \\\"limited\\\" usage is defined in the preceding sentence with the requirement of bounded complexity for processing each mini-batch, to ensure efficiency in the presence of the neighborhood expansion issue. The role of the PIS is thus distinct from the memory buffer of replay methods: it serves as a \\\"database\\\" where even a huge graph could be stored, and its use is similar to what is done in traditional CGL when accessing inter-task edges. In such CGL settings, while not explicitly stated, it is de facto used nonetheless, as multiple hops of past nodes are required. An alternative, equivalent formulation would be to require that each node arrives equipped also with its l-hop neighborhood, or a subset of it. The memory buffer instead is a more limited storage of samples that is not used for the construction of neighboring information of mini-batch nodes, but it is used by the continual learning method to prevent forgetting. \\n3. As clarified in response to the first identified weakness, given a new task with N nodes, the model will be trained on N/batch\\\\_size mini-batches in an online fashion. We consider multiple passes only on each mini-batch before passing to the next. \\n\\n[1] Xikun Zhang, Dongjin Song, and Dacheng Tao, \\\"Sparsified Subgraph Memory for Continual Graph Representation Learning\\\", in 2022 IEEE International Conference on Data Mining (ICDM), 2022\"}", "{\"comment\": \"We thank the reviewer for the detailed comments. Here are point-by-point answers to the identified weaknesses:\\n1. We acknowledge the importance of elaborating on the scenarios where OCGL can be impactful. 
Specifically, OCGL can be used in all the applications of graph continual learning where quick model adaptation is required for timely predictions after distribution shifts. To address this, we expanded the beginning of section 3 on the introduction of the OCGL framework for streaming graph data.\\n2. While all the baselines for the paper are Continual Learning methods, we note that A-GEM and MAS are in fact \\\"onlinable\\\" by design, as the learning protocol of A-GEM involves a single pass over the training stream and the parameter importance of MAS is updated in an online fashion. While additional baselines could be considered, we believe that the current ones are a representative selection of Continual Learning strategies. As pointed out also by reviewers zdzj and gFtT, we would rather expand our selection of baselines with more recent GCL methods. In particular, in the final version of the paper we will include results for \\\"Sparsified Subgraph Memory\\\" [1], for which we have already launched the experiments.\\n3. We recognize the value of diversity in benchmarking datasets. While our study already spans four well-established graph datasets (CoraFull, Arxiv, Reddit, and Amazon Computer), we plan to expand this suite in future work to include additional graph datasets. The number of considered datasets is nonetheless in line with similar literature in GCL and OCL [2,3,4].\\n4. We wish to clarify that the primary aim of this work is to formalize the OCGL framework and provide a foundational benchmark. We believe this is an essential first step for further exploration and innovation in this field. By defining the problem space, analyzing possible problems and offering a robust benchmarking environment, our paper lays the groundwork for developing and testing novel algorithms. 
We believe that this contribution is particularly relevant as in the CGL literature there are some studies that refer to their setting as streaming, yet they are arguably not comparable to the OCL literature, and there is no well-defined setting such as the one we propose with our work.\\n5. While random sampling might appear straightforward, we adopted this as a first, pragmatic solution to highlight the neighborhood expansion problem and to demonstrate its impact. We acknowledge this limitation and propose in the conclusion that future research should develop more sophisticated strategies tailored to OCGL. In addition, we believe this issue needs to be raised not only in the online setting, but also in standard CGL, where some methods implicitly use past data thanks to multiple hops of message passing into past task nodes. We hope therefore that raising the issue can lead to more careful and efficient design strategies for GCL.\\n\\n[1] Xikun Zhang, Dongjin Song, and Dacheng Tao, \\\"Sparsified Subgraph Memory for Continual Graph Representation Learning\\\", in 2022 IEEE International Conference on Data Mining (ICDM), 2022\\n\\n[2] Xikun Zhang, Dongjin Song, and Dacheng Tao, \\\"CGLB: Benchmark Tasks for Continual Graph Learning\\\", Advances in Neural Information Processing Systems, 2022\\n\\n[3] Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner, \\\"Online continual learning in image classification: An empirical survey\\\", Neurocomputing, 2022\\n\\n[4] Albin Soutif\\u2013Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado, Vincenzo Lomonaco, Joost Van De Weijer, and Hamed Hemati, \\\"A Comprehensive Empirical Evaluation on Online Continual Learning\\\", in 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023\"}", "{\"comment\": \"I thank the author for the diligent response. However, I still have not got clear answers to my concerns. 
For example, the new response maintains that the main contribution of the paper is the study of a new setting. I still fail to see what the new setting is, how it differs from typical graph continual learning, and why this setting is important.\\n\\nAs such, I will maintain the original score.\"}", "{\"comment\": \"Thank you for the further response.\\nHowever, I still do not find sufficient/direct answers to my questions, as optimizing for memory/computation resources is also a desired/standard objective for CGL methods.\\n\\nI will suggest that in a future version of the paper, the authors\\n\\n1. provide a formal side-by-side comparison between CGL and OCL with a quantitative rather than a qualitative description (as this is one of the claimed main contributions)\\n\\n2. provide a concrete toy example of the proposed setting, illustrate how the task boundary setting differs from CGL, and explain why the existing CGL methods are not applicable/effective in this setting.\"}", "{\"comment\": \"I thank the author for the response, which has resolved some of my concerns. However, I still think the technical and experimental contributions are somewhat insufficient, so I will maintain my score.\"}", "{\"comment\": \"We thank the reviewer for their feedback and the opportunity to clarify the Online Continual Graph Learning (OCGL) setting. OCGL addresses a gap between the literature on Online Continual Learning (OCL) and Continual Graph Learning (CGL) by introducing a learning paradigm tailored to real-time graph data. In the camera-ready version of the manuscript, we will introduce a paragraph to specifically compare OCGL with CGL. As mentioned in the problem formulation of section 3.1, the distinctions are as follows:\\n- Data stream and granularity: in CGL, the data stream typically consists of graph snapshots (e.g., subgraphs or tasks) available for offline training with multiple passes. 
In contrast, OCGL processes data as individual nodes or small mini-batches, requiring immediate updates and supporting only single-pass learning.\\n- Training methodology: standard CGL assumes offline settings with relaxed computational constraints. OCGL enforces stricter online constraints, limiting computational overhead and memory usage, and specifically requires bounded processing time for each mini-batch. This allows the model to adapt quickly to distribution changes and enables anytime predictions.\\n- Task boundaries: CGL frequently relies on explicit task boundaries for training and evaluation, while OCGL operates in a task-free or boundary-agnostic manner. This makes OCGL suitable for dynamic, evolving graphs where distribution shifts occur without clear demarcations.\\n\\nThe OCGL framework thus addresses significant gaps in the existing literature by adapting online continual learning principles to dynamic graph-structured data, especially regarding computational efficiency, which leads to our proposal of neighborhood sampling. This framework formalizes a novel setting that combines the challenges of evolving graph data with the stringent constraints of online learning, such as single-pass training, task-free learning, and limited memory and computational resources. By doing so, OCGL captures the demands of real-world scenarios, such as social networks or recommender systems, where data arrives incrementally and predictions must be made in real time.\\n\\nOverall, the contributions of our work are twofold: establishing OCGL as a well-defined and practical problem space and creating the foundation for future research by identifying key challenges and providing the tools to address them.\"}", "{\"metareview\": \"This paper aims to address the challenges of applying continual learning principles to graph-structured data in an online setting. 
Reviewers agreed that this paper presents an interesting framework for online continual graph learning (OCGL) and also proposes novel ideas such as a neighborhood sampling strategy. However, reviewers raised concerns about insufficient justification of the problem setting, insufficient baselines and datasets, the lack of theoretical analysis, etc. Although some of the concerns have been addressed during the rebuttal and discussion stages, reviewers and the AC still found that the paper, in its current version, is not yet ready for publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about insufficient justification of the problem setting, insufficient baselines and datasets, the lack of theoretical analysis, etc. The detailed responses have addressed some of these concerns. However, during the post-rebuttal discussions, reviewers still had some concerns that cannot be resolved by a minor revision. For instance, the unique technical challenges for the proposed problem setting remain unclear. Also, the evaluation only covers a limited set of datasets, which restricts the generalizability of the proposed benchmark.\"}
4sJIgdErt1
Unified Framework for Causal Discovery and Long-term Forecasting in Non-stationary Environments
[ "Har Simrat Singh", "Biwei Huang" ]
Non-stationary data is prevalent in various real-world domains such as climate science, economics, and neuroscience, presenting significant challenges for tasks like forecasting and causal discovery from observational data. Existing approaches often operate under the assumption that the data is stationary. In this work, we introduce a unified framework that combines long-term forecasting and causal discovery with non-linear relations in a non-stationary setting. Specifically, we assume that the nonlinear causal relations in the observed space can be transformed into linear relations in the latent space via projections. In addition, we model the non-stationarity in the system as arising from time-varying causal relations. The proposed model demonstrates that adopting a causal perspective for long-term forecasting not only addresses the limitations of each task but also makes the causal process identifiable, enhances interpretability, and provides more reliable predictions. Moreover, our approach reformulates causal discovery into a scalable, non-parametric deep learning problem. Through experiments on both synthetic and real-world datasets, we show that our framework outperforms baseline methods in both forecasting and causal discovery, underscoring the benefits of this integrated approach.
[ "causal discovery", "long-term forecasting" ]
https://openreview.net/pdf?id=4sJIgdErt1
https://openreview.net/forum?id=4sJIgdErt1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "eUzg1HsreX" ], "note_type": [ "comment" ], "note_created": [ 1729737159339 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12796/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4sJ2FYE65U
Neural Multi-Objective Combinatorial Optimization via Graph-Image Multimodal Fusion
[ "Jinbiao Chen", "Jiahai Wang", "Zhiguang Cao", "Yaoxin Wu" ]
Existing neural multi-objective combinatorial optimization (MOCO) methods still exhibit an optimality gap since they fail to fully exploit the intrinsic features of problem instances. A significant factor contributing to this shortfall is their reliance solely on graph-modal information. To overcome this, we propose a novel graph-image multimodal fusion (GIMF) framework that enhances neural MOCO methods by integrating graph and image information of the problem instances. Our GIMF framework comprises three key components: (1) a constructed coordinate image to better represent the spatial structure of the problem instance, (2) a problem-size adaptive resolution strategy during the image construction process to improve the cross-size generalization of the model, and (3) a multimodal fusion mechanism with modality-specific bottlenecks to efficiently couple graph and image information. We demonstrate the versatility of our GIMF by implementing it with two state-of-the-art neural MOCO backbones. Experimental results on classic MOCO problems show that our GIMF significantly outperforms state-of-the-art neural MOCO methods and exhibits superior generalization capability.
[ "Neural Multi-Objective Combinatorial Optimization", "Multimodal Fusion", "Deep Reinforcement Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=4sJ2FYE65U
https://openreview.net/forum?id=4sJ2FYE65U
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vAWmiux6ri", "sTqr923mtn", "pIg0qAcfYZ", "koOwHg8AXK", "ZKHbYdC2RY", "WnqJPflfL1", "KbcA33wufy" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "meta_review", "decision" ], "note_created": [ 1730721718671, 1730646151422, 1730413239534, 1730659764020, 1730461376910, 1734310344191, 1737524268957 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13570/Reviewer_QYyp" ], [ "ICLR.cc/2025/Conference/Submission13570/Reviewer_SVeH" ], [ "ICLR.cc/2025/Conference/Submission13570/Reviewer_4hxj" ], [ "ICLR.cc/2025/Conference/Submission13570/Reviewer_3Wbv" ], [ "ICLR.cc/2025/Conference/Submission13570/Reviewer_hYPJ" ], [ "ICLR.cc/2025/Conference/Submission13570/Area_Chair_fNAY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper aims to fully leverage the intrinsic features of problem instances by proposing a novel graph-image multimodal fusion framework for solving multi-objective combinatorial optimization (MOCO). The authors introduce the concept of \\\"image\\\" for MOCO to better capture the spatial structure of problem instances, enhancing the learning process. They also propose a problem-size adaptive resolution strategy to improve generalization. Finally, the paper presents a multimodal fusion mechanism with modality-specific bottlenecks to efficiently integrate graph and image information.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is good.\\n\\n2. The experiments are detailed, and the results are competitive.\", \"weaknesses\": \"1. I believe the role of the image concept in CO is questionable. To some extent, using images in CO results in information loss and requires more space to represent. As far as I know, many Euclidean TSP models, like [1, 2], use positions directly as input, which requires less space and provides more precise information.\\n\\n2. 
Compared to typical neural MOCO methods, GIMF uses sparse matrix images as input, resulting in larger neural network sizes and an inability to handle larger-scale routing problems.\", \"questions\": \"1. What's the training loss of the GIMF?\\n\\n2. Can GIMF obtain a Pareto set of solutions?\\n\\n3. I noticed the authors used multi-modal fusion, but I don't quite understand this part. Does it mean that each subproblem requires training a single model and then performing model fusion at the end?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel Graph-Image Multimodal Fusion (GIMF) framework designed to enhance multi-objective combinatorial optimization (MOCO) methods. By integrating both graph and image information from problem instances, the framework effectively addresses the limitations associated with relying solely on graph-modal information, particularly in the context of bi- and tri-objective traveling salesman problems (TSP).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The main novelty of this paper lies in defining the image modality alongside the graph modality, offering a new perspective for addressing challenges in MOCO.\", \"This approach enhances conventional heuristic algorithms by leveraging the combined information from both modalities.\"], \"weaknesses\": [\"If the image modality has been defined, why still continue to use the graph modality? How could one approach solving an MOCO problem using only the image modality? Additionally, could you provide ablation studies to support this?\", \"Since the graph can also be viewed as a transition matrix, what is the relationship between reinforcement learning and the authors' algorithm?\"], \"questions\": [\"Is this analogous to a game played on a chessboard? 
Are the authors providing definitions and citations related to reinforcement learning? Are you aiming to transform the graph problem into a gameplay problem on a chessboard?\", \"Has reinforcement learning for MOCO in chessboard games been well studied elsewhere? If MOCO, such as TSP, is defined within the context of a chessboard game, isn\\u2019t it relatively straightforward? Does this require a graph modality, or can the image modality defined by the authors alone be sufficient to address the MOCO problems?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel approach for MOCO through graph-image multimodal fusion. The framework incorporates a constructed coordinate image and efficient multimodal fusion. Experimental results on MOCO problems show the advance of the proposed GIMF.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"S1. The GIMF framework successfully combines graph and image information, enriching representation learning for MOCO problems.\\n\\nS2. The PSAR strategy and modality-specific bottlenecks for multimodal fusion are well-justified and empirically validated.\\n\\nS3. Extensive experiments are conducted.\", \"weaknesses\": \"W1. The improvement achieved by the proposed methods appears to be marginal.\\n\\nW2. The computational cost of the proposed GIMF framework seems considerably higher than state-of-the-art methods like EMNH. It seems that the performance gains come from a cost of efficiency. \\n\\nW3. Constructing images seems relatively straightforward for TSP problems, but how does this approach generalize to other real-world scenarios? 
For some tasks, image construction may be challenging\\u2014how do the authors envision addressing this limitation?\", \"questions\": \"N/A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel graph-image multimodal fusion (GIMF) framework that aims to enhance neural multi-objective combinatorial optimization (MOCO) methods. The GIMF framework integrates graph and image information of problem instances, which is designed to overcome the limitations of existing neural MOCO methods that rely solely on graph-modal information. The main contribution of the proposed method is the coordinate image construction, which provides complementary information to the graph representation.To improve the model's generalization across different problem sizes, a Problem-size Adaptive Resolution (PSAR) strategy is proposed during the image construction process, which helps maintain a stable density for both the image and patches. A multimodal fusion mechanism with Modality-Specific Bottlenecks (MSB) is designed to efficiently couple graph and image information.\\n\\nThe GIMF framework is implemented with two state-of-the-art neural MOCO backbones, namely CNH and PMOCO. Experimental results on classic MOCO problems demonstrate that GIMF can improve neural MOCO methods by providing image-modal information and exhibits superior generalization capability.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The major contribution of this paper is its integration of both graph and image modalities to enhance the representation learning for MOCO problems. The construction of coordinate images and the use of PSAR strategy are innovative steps that address the limitations of relying solely on graph information. The proposed MSB in multimodal fusion mechanism is also a novel contribution.\\n\\n2. 
The paper is well-organized and written in a clear and concise manner. The introduction effectively sets the stage by outlining the challenges in MOCO and the motivation behind the GIMF framework. The preliminary section clearly describes the definition of the MOCO problem and related concepts, as well as the graph transformer for MOCO. The methodology section is detailed, providing a clear explanation of the image construction process, the PSAR strategy, and the multimodal fusion mechanism.\\n\\n3. The significance mainly comes from its novelty. Specifically, it leverages a multimodal approach that incorporates image-modal information, which has the potential to improve many existing neural MOCO methods. The paper is also likely to inspire further research in constructing and learning from images of MOCO problems.\\n\\n4. The experimental results suggest that GIMF does not noticeably increase the computational time of the neural MOCO backbone. The major innovations, PSAR and MSB, are validated by the ablation study.\", \"weaknesses\": \"The proposed method could improve the performance of CNH and PMOCO, as well as their augmented variants. However, sometimes the improvement seems marginal. In Table 1 and Table 2, the reported improvements are mostly less than 0.001, and sometimes as small as 0.0001. Meanwhile, the reported best results cannot significantly outperform SOTA baselines.\", \"questions\": \"In Table 1 and Table 2, the reported times of GIMF-P and GIMF-C are sometimes smaller than those of PMOCO and CNH; what is the reason for this phenomenon?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a generic graph-image multimodal fusion (GIMF) framework that integrates graph and image information of the problem instances to enhance neural MOCO. 
The framework consists of three main components: (1) a constructed coordinate image (2) a problem-size adaptive resolution strategy and (3) a multimodal fusion mechanism. Experimental results demonstrate its effectiveness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper is well-organized and clear writing.\\n2.\\tThe proposed method demonstrates novelty.\\n3.\\tExperiments show that GIMF performs better on classic MOCO problems.\", \"weaknesses\": \"Some details and parameter settings were not explained clearly (see Questions below).\", \"questions\": \"1. What does \\\"$\\\\bm{\\\\pi_i}$\\\" represent in the formula for calculating $\\\\nabla\\\\mathcal{L}(\\\\bm{\\\\theta})$ on line 136, or should it be changed to \\\"$\\\\pi_i$\\\"?\\n\\n2. What is the meaning of the line from $\\\\pi_t$ to $h_c$ in Figure 2? The authors should supplement the relationship between $\\\\pi_t$ and $h_c$ in the main text.\\n\\n3. Why choose the dimension of patch as $w=h=16$? 
The authors should provide an explanation or conduct ablation experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a graph-image multimodal fusion framework to enhance neural multi-objective combinatorial optimization.\", \"contributions\": \"constructing coordinate images, employing a problem-size adaptive resolution strategy, and introducing a multimodal fusion mechanism with modality-specific bottlenecks.\", \"strengths\": \"the experiments demonstrate consistent improvements over state-of-the-art method; novelty in integrating image-based information and detailed experimental validation.\", \"weaknesses\": \"marginal improvements reported in certain cases and the increased computational cost due to image processing; constructing images for some real-world problems might be challenging.\", \"additional_comments_on_reviewer_discussion\": \"The responses addressed most concerns, leading to my acceptance recommendation based on the framework\\u2019s novel approach and experimental support.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
4sDicVEy6M
What Do You See in Common? Learning Hierarchical Prototypes over Tree-of-Life to Discover Evolutionary Traits
[ "Harish Babu Manogaran", "M. Maruf", "Arka Daw", "Kazi Sajeed Mehrab", "Caleb Patrick Charpentier", "Josef Uyeda", "Wasila M Dahdul", "Matthew J Thompson", "Elizabeth G Campolongo", "Kaiya L Provost", "Wei-Lun Chao", "Tanya Berger-Wolf", "Paula Mabee", "Hilmar Lapp", "Anuj Karpatne" ]
A grand challenge in biology is to discover evolutionary traits---features of organisms common to a group of species with a shared ancestor in the tree of life (also referred to as phylogenetic tree). With the growing availability of image repositories in biology, there is a tremendous opportunity to discover evolutionary traits directly from images in the form of a hierarchy of prototypes. However, current prototype-based methods are mostly designed to operate over a flat structure of classes and face several challenges in discovering hierarchical prototypes, including the issue of learning over-specific prototypes at internal nodes. To overcome these challenges, we introduce the framework of Hierarchy aligned Commonality through Prototypical Networks (HComP-Net). The key novelties in HComP-Net include a novel over-specificity loss to avoid learning over-specific prototypes, a novel discriminative loss to ensure prototypes at an internal node are absent in the contrasting set of species with different ancestry, and a novel masking module to allow for the exclusion of over-specific prototypes at higher levels of the tree without hampering classification performance. We empirically show that HComP-Net learns prototypes that are accurate, semantically consistent, and generalizable to unseen species in comparison to baselines. Our code is publicly accessible at Imageomics Institute Github site: https://github.com/Imageomics/HComPNet.
[ "deep learning", "interpretability", "prototype-based neural network", "phylogeny", "computer vision" ]
Accept (Poster)
https://openreview.net/pdf?id=4sDicVEy6M
https://openreview.net/forum?id=4sDicVEy6M
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vzxgzCYqc8", "ulbNOzSwsI", "twQMywxK2y", "taMLVXwsAE", "pjxMB8QU7S", "pIo4XO8Gav", "oGOYoEj3uF", "mu007zQ2tL", "mo5AzqI0rl", "lt1Ow5Pa3e", "goESL6rEyu", "gLQDZrcBCz", "d0RNPOWYPL", "cf97OWa7hn", "bTZ4DXOBoY", "S02IELzq55", "R9PTmWoQxU", "PtwJcvmHfy", "PKN9poVyGs", "GM1V2f2CQo", "EgynGp289c", "C5rsG4RdmB", "8uAHNletSc", "320IolRPof", "11D5tyuJxQ", "0zIByR1PmP" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1734615579106, 1732192637193, 1732215473005, 1732501036470, 1732526918661, 1732903520093, 1732189048973, 1732189714385, 1730625760290, 1732614995795, 1732558837737, 1732190016160, 1732188686148, 1732903465543, 1732559855349, 1732192912296, 1730577461485, 1732630791306, 1732678362047, 1737524124638, 1730575188385, 1730691599035, 1732192544958, 1732903533017, 1733304317373, 1730533760189 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11437/Area_Chair_FzGH" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_ta8u" ], [ "ICLR.cc/2025/Conference/Submission11437/Area_Chair_FzGH" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_hRXA" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_z89S" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_hsuk" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_hsuk" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_PA9n" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_PA9n" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_ta8u" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Authors" ], [ "ICLR.cc/2025/Conference/Submission11437/Reviewer_z89S" ] ], "structured_content_str": [ "{\"metareview\": \"HComP-Net (Hierarchy aligned Commonality through Prototypical Networks) is a new machine learning framework that discovers evolutionary traits in species by analysing images and learning hierarchical prototypes aligned with phylogenetic trees. The system's key contribution is its ability to identify visual features shared by species with common ancestors while avoiding \\\"over-specific\\\" prototypes that only apply to some descendant species.\", \"the_framework_introduces_three_main_technical_contributions\": \"an over-specificity loss to ensure prototypes are genuinely common across descendant species, a discriminative loss to ensure prototypes are absent in contrasting species groups, and a masking module to identify and exclude over-specific prototypes without impacting classification performance.\\n\\nTested on datasets of birds, fish, butterflies, spiders, and turtles, HComP-Net demonstrates better semantic consistency in identified prototypes compared to baselines and shows promise in generalising to unseen species. 
While its classification accuracy is somewhat lower than that of non-interpretable models, this trade-off enables clear interpretability and the generation of testable hypotheses about evolutionary traits.\n\nIn terms of some of its limitations, the system works best with high-quality images and is limited to analysing local, visually observable traits rather than global features. Despite this, HComP-Net represents a neat approach to automatically discover and analyse evolutionary traits from image data.\n\nIn a nutshell, its contribution is bridging computer vision and evolutionary biology by providing a tool that can automatically identify and analyze traits that may have evolved over time, potentially accelerating scientific discovery in evolutionary biology.\n\nHowever, it might be seen as speaking to a narrow audience given its experimental setup, but it should be acknowledged that it has been submitted to the applications to physical systems track.\", \"additional_comments_on_reviewer_discussion\": \"All five reviewers have unanimously voted to accept this paper, while recognising the very targeted nature of this paper in terms of technical novelty, comparisons with other baselines, and the overall usefulness in other domains.\nThe authors engaged well with the reviewers and addressed most of the issues identified.\nOn balance, everyone agrees that despite the limitations, this paper is worth publishing.\"}", "{\"comment\": \"*continuation to previous comment*\\n\\n**C5.** **Role in advancing our understanding of evolution and practical use case**\\n\\n> We agree with the reviewer that some features highlighted by HComPNet can also be identified by other models that do not consider any phylogeny. However, HComPNet\\u2019s advantage lies in assigning every discovered feature (or trait) to its respective position (ancestor node and hierarchical level) in the tree of life. 
For example, as shown in Figure 6, while INTR highlights the beak and neck of the western grebe species of birds, HComPNet associates these features with the correct hierarchical level, where it can be interpreted as a potential evolutionary trait. For practical use-cases, we request the reviewer to refer to Figures 5, 6, and 10\\u201313 (Appendix). At every ancestor node, a prototype suggests a hypothesis for a potential evolutionary trait that could have developed during that particular stage of evolution. Such prototypes provide a useful starting point to further investigate how the descendant species have evolved over time.\\n\\n**C6.** **How prototypical images are chosen? and challenges due to non-typical angles**\\n> This is certainly a challenge in working with organismal datasets because images are collected from various imaging perspectives and angles. For this reason, in most of our visualizations, we try to visualize multiple images (the top-k nearest patches to a prototype), so that we can get a better understanding of the visual features identified by a prototype. We will mention this in our discussion of limitations of our current work.\\n\\n**C7.** **Extending to other non-biological problems**\\n> While our method is developed with the goal of identifying evolutionary traits, it can be applied without any modifications to any task where the goal is to identify commonalities among classes in-line with a known hierarchy of classes. In particular, our approach can be applied to any fine-grained datasets with known hierarchical relationships where the classes are likely to share common features with one another.\\n\\n(We have provided list of all references in the global comment)\"}", "{\"comment\": \"The set of all reference used throughout our responses have been listed here.\\n\\n[1] Nathan D. Smith, Alan H. 
Turner, Morphology's Role in Phylogeny Reconstruction: Perspectives from Paleontology, Systematic Biology, Volume 54, Issue 1, February 2005, Pages 166\\u2013173, https://doi.org/10.1080/10635150590906000\\n\\n[2] Larson, A. (1998). The comparison of morphological and molecular data in phylogenetic systematics. In: DeSalle, R., Schierwater, B. (eds) Molecular Approaches to Ecology and Evolution. Birkh\\u00e4user, Basel. https://doi.org/10.1007/978-3-0348-8948-3_15\\n\\n[3] Per G P Ericson, Yanhua Qu, An evaluation of the usefulness of morphological characters to infer higher-level relationships in birds by mapping them to a molecular phylogeny, Biological Journal of the Linnean Society, 2024, blae070, https://doi.org/10.1093/biolinnean/blae070\\n\\n[4] Vasyl Alba, James E Carthew, Richard W Carthew, Madhav Mani (2021) Global constraints within the developmental program of the Drosophila wing. eLife 10:e66750\\n\\n[5] Ma, C., Zhao, B., Chen, C. and Rudin, C., 2024. This looks like those: Illuminating prototypical concepts using multiple visualizations. Advances in Neural Information Processing Systems, 36.\\n\\n[6] Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T. and Xie, S., 2022. A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11976-11986).\\n\\n[7] Hase, P., Chen, C., Li, O. and Rudin, C., 2019, October. Interpretable image recognition with hierarchical prototypes. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 32-40).\\n\\n[8] Zhou, Bolei, et al. \\\"Learning deep features for discriminative localization.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\\n\\n[9] Selvaraju, Ramprasaath R., et al. \\\"Grad-cam: Visual explanations from deep networks via gradient-based localization.\\\" Proceedings of the IEEE international conference on computer vision. 2017.\\n\\n[10] Simonyan, Karen. 
\\\"Deep inside convolutional networks: Visualising image classification models and saliency maps.\\\" arXiv preprint arXiv:1312.6034 (2013).\\n\\n\\n[11] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. \\\"Axiomatic attribution for deep networks.\\\" International conference on machine learning. PMLR, 2017.\\n\\n[12] Binder, Alexander, et al. \\\"Layer-wise relevance propagation for neural networks with local renormalization layers.\\\" Artificial Neural Networks and Machine Learning\\u2013ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II 25. Springer International Publishing, 2016. \\n\\n[13] Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C. and Su, J.K., 2019. This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems, 32.\\n\\n[14] Nauta, M., Schl\\u00f6tterer, J., Van Keulen, M. and Seifert, C., 2023. Pip-net: Patch-based intuitive prototypes for interpretable image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2744-2753).\\n\\n[15] Paul, D., Chowdhury, A., Xiong, X., Chang, F.J., Carlyn, D., Stevens, S., Provost, K.L., Karpatne, A., Carstens, B., Rubenstein, D. and Stewart, C., 2023. A simple interpretable transformer for fine-grained image classification and analysis. arXiv preprint arXiv:2311.04157.\", \"title\": \"References Used in the Response\"}", "{\"comment\": \"Thanks for the authors' responses. 
My concerns are addressed, and I'll keep my positive rating.\"}", "{\"title\": \"Please enage in the discussion\", \"comment\": \"Dear all,\\n\\nMany thanks to the reviewers for their constructive reviews and the authors for their detailed responses.\\n\\nPlease use the next ~2 days to discuss any remaining queries as the discussion period is about to close.\\n\\nThank you.\\n\\nRegards,\\n\\nAC\"}", "{\"comment\": \"Thank you very much for your valuable feedback and support!\"}", "{\"comment\": \"We thank the reviewer for the encouraging review and constructive feedback on our work. We have addressed the reviewer's comments and questions in detail, and we would be happy to respond promptly if more details are required.\\n\\n**C1.** \\u201cThe first comment is related to the choice of using visual data to identify common features in species evolution. Given the scarcity of high-quality biological images, especially in the context of vast evolutionary networks, and the inherent issues of imbalance and interference in such images, I wonder if it might be more precise to analyze common traits directly from textual descriptions or anatomical data. Could you please elaborate on the rationale behind prioritizing visual data for this task?\\u201d\\n\\n> Thanks to the reviewer for bringing up this interesting point. We agree that textual and anatomical data can also serve as a different source of information to discover evolutionary traits. However, there are two main advantages in using visual data for trait discovery compared to textual data, motivating us to work with images.\\n\\n> *First*, images allow machines to see what has not yet been recorded by biologists in the form of text. Since text-based annotations of traits are obtained through careful measurements of *known* parts of organisms (e.g., length of fins or color of beaks), they may not contain information about novel trait variations that are yet *unknown* to biologists. 
For example, there are subtle variations in traits such as the precise shape of leaf venations in plants or wing morphology in insects [4] that are difficult to capture in text-based descriptions but can be observed in images (we show some newly discovered traits of wing morphology on the Butterfly dataset in Figures 10, 11, and 14 to 17). By leveraging images, we can enable machines to see what biologists do not see, leading to the discovery of novel fine-grained traits linked to evolution, which is the motivation for this work.\\n\\n> *Second*, the scale of image-based datasets in organismal biology is vastly greater than that of expert-curated text or anatomical datasets of species. Images are increasingly considered the \\u201ccurrency\\u201d for documenting biodiversity, with repositories containing millions of images of biological specimens collected by scientists in field museums or captured by drones, camera traps, or tourists posting photos on social media. On the other hand, preparing expert-curated text or anatomical data of species involves subjective and labor-intensive operations, hindering rapid scientific advancement. This is especially true for newly discovered species for which detailed textual descriptions of traits may not yet be available. For such species, a vision-based approach such as ours can be useful as a first step in placing the species on the phylogeny based on shared visual traits with other species. This initial placement can then be further refined through gene sequencing [1, 2, 3]. \\n\\n> Therefore, we believe that despite the limitations of the vision modality (such as imbalance and interference), tools for analyzing visual data can serve a unique purpose in the analysis of species, especially when textual or morphological data are not available in abundance.\\n\\n**C2.** \\u201cThe paper introduces several loss functions aimed at ensuring the diversity and effectiveness of the learned prototypes. 
I would be very interested to see ablation studies on these loss functions to better understand their individual impact.\\u201d\\n\\n> Ablations for the key loss terms in our approach are provided in Appendix Section D, Table 6. We hope that this provides clarity on the impact of each individual loss term. We will be happy to provide analyses of additional ablations of our approach based on the reviewer's suggestions.\\n\\n**C3.** \\u201cIn Figure 4, when comparing the part consistency between HComP-Net and HPNet, different bird images are used. I am curious to know if this choice is justified and, if so, what the reasoning behind it is. Would it not be more appropriate to use the same images for a clearer comparison?\\u201d\\n\\n\\n> In order to have a fair comparison between the prototypes for part consistency, we have taken the top-3 closest images to the respective prototypes for visualization. Therefore, the sets of images differ, since the top-3 closest image patches can vary between prototypes.\\n\\n(We have provided a list of all references in the global comment)\"}", "{\"comment\": \"We thank the reviewer for the encouraging review and constructive feedback on our work. We have addressed the reviewer\\u2019s comments and questions in detail, and we would be happy to respond promptly if more details are required.\\n\\n**C1.** **On the Need for a Separate Masking Module**\\n\\n> The masking module solves a complementary goal that is not addressed by the other components of our framework, including the loss terms. In particular, we use loss terms such as the over-specificity loss and discriminative loss to ensure that prototypes learned at internal nodes are common to all descendant species of that node. However, at higher levels in the tree, since the descendants can be quite diverse going far back in the process of evolution, there may be few or no visually noticeable traits shared between them. 
In such cases, to ensure the learning of non-empty sets of prototypes, especially at higher levels of the tree, we introduce the masking module to \"slack\" the constraint of dropping over-specific prototypes so that we can still perform classification at lower levels of the tree. Note that these over-specific prototypes, while useful for classification at lower levels of the tree, are not helpful in discovering evolutionary traits. Hence, we introduce a masking mechanism to identify and ignore such over-specific prototypes during the analysis of evolutionary traits. Also, since masks are just one additional parameter per prototype, they do not add significant complexity to the model.\\n\\n**C2.** **On the Role of the Phylogenetic Tree in Classification**\\n\\n> Although it is possible that providing hierarchical information can help with the classification task, the primary motivation behind using hierarchical relationships is not just to improve predictive accuracy but also to focus on the interpretability of our discovered features in a format that is useful to scientists. In particular, the primary goal of our work is to learn prototypes that are representative of evolutionary traits. We can also see in our results that using hierarchical information does not always guarantee an improvement in performance. For example, we can see in Table 1 that the performance of HPnet [7] is relatively poor in comparison to other approaches despite its use of hierarchy.\\n\\n**C3.** **ConvNeXt-tiny performance compared to other methods in the table**\\n\\n> We appreciate the reviewer\\u2019s inquiry. We did not explore multiple non-interpretable classification models, including ConvNeXt-tiny [6], because our focus was on comparing our results with other interpretable and hierarchical classification methods. As we can observe in the table below, ConvNeXt-tiny, being a highly optimized architecture for feature extraction, does demonstrate strong classification performance. 
While we use ConvNeXt-tiny as the backbone, the observed improvements in performance of our approach compared to baselines are not solely due to the backbone. As shown in the loss ablations in Table 6 (Appendix Section D), learning hierarchical prototypes without incorporating our novel loss formulations is not so effective, even when ConvNeXt-tiny is used as the backbone. \\n\\n\\n| Model | Bird | Butterfly | Fish | Spider | Turtle |\\n|-----------|-------|-----------|-------|--------|--------|\\n| ConvNeXt-tiny | 84.23 | 96.28 | 91.73 | 84.05 | 69.56 |\\n\\n\\n\\n**C4.** **Possibility of obtaining similar findings using Explainable AI techniques**\\n\\n\\n> The purpose of using explainable AI (xAI) techniques, such as CAM [8], GradCAM [9], saliency maps [10], integrated gradients [11], or LRP [12], is to clarify the classification decisions made by a classifier model. For instance, CAM offers an *object-level* interpretation by identifying the most discriminative region, possibly covering the entire object that contributes significantly to the classification outcome. The explanations are not necessarily constrained to a localized region. In contrast, prototypical approaches offer a *part-level* interpretation where every learned prototype can identify and highlight a distinct localized visual feature (or a trait). Furthermore, the aforementioned xAI techniques cannot find multiple distinct features (or parts) for a given class. In the case of prototypical approaches, every prototype learned for a class can represent a distinct feature corresponding to the same class. 
Therefore, for the task of identifying individual traits, prototypes are better suited than other xAI approaches.\\n\\n\\n(We have provided a list of all references in the global comment)\"}", "{\"summary\": \"The work presents a novel framework called Hierarchy aligned Commonality through Prototypical Networks (HComP-Net) aimed at discovering evolutionary traits among species by learning hierarchical prototypes over the tree of life. It addresses the challenges of existing prototype-based methods that often produce over-specific prototypes at internal nodes, which can hinder the identification of common traits shared by descendant species. HComP-Net employs a unique over-specificity loss, a discriminative loss to ensure prototypes are absent in contrasting species, and a masking module to maintain classification performance. Through empirical analysis on various datasets, including birds, fishes, turtles, and butterflies, the authors demonstrate that HComP-Net effectively learns accurate, semantically consistent, and generalizable prototypes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper investigates a highly intriguing task: identifying visual features preserved during species evolution. I believe this task is inherently challenging due to the limited and often insufficient quality of training data, making it difficult to obtain stable, semantically interpretable visual features. In this work, the design of the loss function and the associated explanations are intuitive and easy to understand.\", \"weaknesses\": \"1. The first comment is related to the choice of using visual data to identify common features in species evolution. 
Given the scarcity of high-quality biological images, especially in the context of vast evolutionary networks, and the inherent issues of imbalance and interference in such images, I wonder if it might be more precise to analyze common traits directly from textual descriptions or anatomical data. Could you please elaborate on the rationale behind prioritizing visual data for this task?\\n\\n2. The paper introduces several loss functions aimed at ensuring the diversity and effectiveness of the learned prototypes. I would be very interested to see ablation studies on these loss functions to better understand their individual impact.\\n\\n3. In Figure 4, when comparing the part consistency between HComP-Net and HPNet, different bird images are used. I am curious to know if this choice is justified and, if so, what the reasoning behind it is. Would it not be more appropriate to use the same images for a clearer comparison?\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Author's rebuttal\", \"comment\": \"Thanks for the explanation and added results to reduce my concerns. I will maintain my score.\"}", "{\"title\": \"Request for Final Reviewer Queries and Score Updates\", \"comment\": \"We sincerely appreciate the insightful comments and valuable feedback provided by our reviewers. As we near the conclusion of the discussion period, we would like to extend an invitation for any further questions or clarifications. If our responses have adequately addressed your concerns, we kindly request that you consider revising your scores and confidence accordingly. Thank you for your time and dedication to this review process.\"}", "{\"comment\": \"We thank the reviewer for the encouraging review and constructive feedback on our work. 
We have addressed the reviewer\u2019s comments and questions in detail, and we would be happy to respond promptly if more details are required.\\n\\n**C1.** \\u201cThe primary contribution lies in incorporating parent-child relationships to reduce over-specific prototypes within the contrastive losses, while the architecture does not appear to include specific structures for establishing a prototype hierarchy. In my understanding, proposing a hierarchical structure is a contribution of HPNet, not HCompPNet. This point should be clarified in title and method description.\\u201d\\n\\n\\n> We agree with the reviewer that HPNet [7] was the first to introduce the concept of using a hierarchical structure for learning prototypes. In our work, we build on this foundation to address two major limitations in using HPNet and other related works for discovering hierarchical prototypes, namely the issue of finding over-specific prototypes and the absence of mechanisms for excluding such prototypes at higher levels of the tree. In the related works section, we have described how our work is related to HPnet in terms of learning prototypes at every internal node of the hierarchy. To make this point even clearer, we have now made the following modification in the introduction section to ensure that the contribution of HPnet in learning hierarchical prototypes is properly recognized.\\n\\n> \\u201c\\nDespite the success of ProtoPNet and its variants including *HPnet that first introduced the idea of learning hierarchical prototypes at every internal node of the tree*, there are three main challenges to be addressed while learning hierarchical prototypes for discovering evolutionary traits. ... \\n\\u201d\\n\\n**C2.** \\u201cThe exclusion of certain related works leaves me a question about novelty on practical impact. For instance, PIPNet, a recent and closely related method employing self-supervised learning (2023), is not included in the comparison. It only uses HPNet from 2019. 
However, semantic gaps has relevance to over-specificity, and authors also mention strong motivation from PIPNet. The reason why the network is excluded needs more detailed explanation or comparison in experiments.\\u201d\\n> We agree that results from PIPNet [14] can be a valuable addition to our performance comparison. We hereby provide the results from PIPNet on all five datasets.\\n\\n| Model | Bird | Butterfly | Fish | Spider | Turtle |\\n|-----------|-------|-----------|-------|--------|--------|\\n| PIPNet | 83.38 | 97.87 | 93.58 | 83.45 | 67.24 |\\n\\n> We have also provided comparisons with another recent interpretable approach, INTR (2024) [15].\\n\\n**C3.** \\u201cWhy does the over-specificity loss adopt a specific log-tanh loss form? Doesn\\u2019t it simply establish an arbitrary criterion for identifying overly specific prototypes? This choice requires further explanation and discussion.\\u201d\\n\\n\\n> The tanh term in our loss formulation is borrowed from a similar usage in PIPNet [14], although it was used there for a different goal, namely preventing representation collapse. The tanh term ensures that each prototype is activated at least once with respect to every descendant in a given mini-batch of images. This accounts for the fact that some traits might occur frequently across the species, while others may only appear in specific subsets of images\\u2014for instance, traits observable exclusively in adult specimens but not in young ones. Therefore, the tanh term is employed to prevent the criterion from being influenced by the frequency of occurrence of traits. Additionally, the negative logarithm of tanh provides a steeper gradient when tanh(x) is close to zero, helping with faster model convergence.\\n\\n(We have provided a list of all references in the global comment)\"}", "{\"comment\": \"We thank the reviewer for the encouraging review and constructive feedback on our work. 
We have addressed the reviewer's comments and questions in detail, and we would be happy to respond promptly if more details are required.\\n\\n(Please find the references in the global comment)\\n\\n**C1.** \\u201cIt seems that the over-specificity and discriminative losses play opposing roles in the direction of model optimization, which raises the question of whether these two losses might interact and lead to abnormal model convergence. It would be beneficial if the authors could provide some theoretical or experimental analysis on this issue.\\u201d\\n\\n> We would like to clarify that the over-specificity and discriminative losses do not play opposing roles. For clarity, the losses can also be interpreted as clustering and separation losses generally used in clustering algorithms. The over-specificity loss encourages prototypes to be equally close to all species within the positive set (descendant species), while the discriminative loss ensures that these prototypes are separated from species in the negative set (non-descendant species). Together, these losses work to align the learned prototypes with the phylogenetic structure, ensuring they are likely to represent potential evolutionary traits.\\n\\n**C2.** \\u201cThis is a minor point, but the overall method appears to be a combination of multiple techniques, making the flowchart somewhat complex and redundant.\\u201d\\n\\n> Thanks to the reviewer for the feedback. Our aim with the schematic illustration was to provide a clear and comprehensive overview of the various components of our approach. Specifically, we intended to highlight where our key contributions, such as the novel loss formulations and the masking module, are integrated into the architecture. Additionally, we chose to depict the prototypes, classification layer, and masking module, repeated for each node, to emphasize that these layers are learned for each internal node in the hierarchy. 
However, we appreciate the feedback and would be happy to consider any specific suggestion the reviewer may have in mind for simplifying the schematic while maintaining clarity and completeness.\"}", "{\"comment\": \"We are glad that you found our work interesting. Thank you very much for your valuable feedback and support!\"}", "{\"comment\": \"Thank you very much for your feedback and support!\"}", "{\"comment\": \"We sincerely thank all the reviewers for providing constructive feedback. We are encouraged that the reviewers found our work:\\n1. Original and interesting (hsuk, PA9n, z89S)\\n2. Well written with impressive visualizations (ta8u, z89S, hsuk)\\n3. Impactful in phylogenetic analysis (z89S, PA9n)\\n4. Intuitive and technically feasible (ta8u, hRXA)\\n\\nBefore addressing each of the reviewer's comments individually, we would like to address two general comments shared in multiple reviews. \\n\\nThe first general comment is on including **comparisons with more baselines** including ProtoPNet [13] (asked by Reviewer hsuk), PIPNet [14] (asked by Reviewer z89S), and ConvNeXt-tiny (asked by reviewer PA9n). To address this comment, we have included comparisons with additional baselines as summarized in the following table.\\n\\n| Model | Bird | Butterfly | Fish | Spider | Turtle |\\n|-------------|-------|-----------|-------|--------|--------|\\n| PIPNet | 83.38 | 97.87 | 93.58 | 83.45 | 67.24 |\\n| ConvNeXt-tiny | 84.23 | 96.28 | 91.73 | 84.05 | 69.56 |\\n| ProtoPNet | 75.80 | 96.82 | 91.11 | 74.17 | 61.74 |\\n\\nThe second general comment is on the **primary objective of our work** (hsuk and z89S). We would like to re-emphasize that the main motivation for building HComP-Net is to learn semantically meaningful prototypes of a hierarchy of classes (e.g., the tree of life) that helps in *explaining* what are the common features (or traits) that are shared by multiple classes. 
Our focus is on discovering hierarchical prototypes that are explainable and scientifically useful, e.g., in the target application of discovering evolutionary traits. While the use of hierarchy also helps HComP-Net improve classification accuracy, as demonstrated in our results in Table 1, beating all state-of-the-art (SOTA) classification methods is not our end goal; our priority is that the learned features are explainable and scientifically useful for discovering hierarchical prototypes. We demonstrate the ability of our approach to discover semantically meaningful hierarchical prototypes in comparison with baseline hierarchical methods both qualitatively and quantitatively in Section 5.3 of the paper.\\n\\nWe hope that our responses to the individual review comments provided below address the main concerns of the reviewers. If we missed any detail, we will be happy to provide more clarifications during the discussion period. If our responses have adequately addressed your concerns, we kindly request that you consider revising your scores. Thank you very much for your time and effort.\", \"title\": \"Global Response to All Review Comments\"}", "{\"summary\": \"The paper introduces a new framework that can be used for learning biological prototypes within hierarchies while avoiding the learning\\nof over-specific features at internal nodes of the genetic tree. The authors perform tests with different datasets including mostly the pictures of birds, fishes and butterflies. The authors focus on the quantitative and qualitative evaluation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea presented in the paper is interesting and original. The idea is quite simple (which is a plus). 
I liked that the authors aimed to present a method that can help to facilitate the analysis of different species in biology.\", \"The paper is well-written, and also presents a lot of nice and clean graphics and examples that make the content understandable.\"], \"weaknesses\": [\"The motivation of the paper says that there are many image datasets in biology, so ML can be used to provide some new visual suggestions for common traits in species belonging to a common group. Nevertheless, such suggestions seem to be more important for some newly discovered species (and not the ones that are already well-known), and for such species there is a possibility of not having so many images. This can decrease the practicality of the method. Statements such as \\u201cFurthermore, HComP-Net demonstrates a unique ability to generate novel hypotheses about evolutionary traits, showcasing its potential in advancing our understanding of evolution\\u201d are too bold in my opinion.\", \"Another issue with a possible practical use of the method is that the proposed solution to provide semantically meaningful information requires human annotation, which can be very subjective. The authors mention this limitation in the appendix; however, they do not propose any mitigation.\", \"minor: some typos/grammatical mistakes can be found in the paper, e.g. \\u201cthat are not just shared across all it descendant species but are also\\u201d -> \\u201cthat are not just shared across all ITS descendant species but are also\\u201d\"], \"questions\": [\"The paper discusses many similarities with ProtoPNet and refers to it often, e.g. \\u201cFor HPnet, we used the same hyperparameter settings and training strategy as used by ProtoPNet for the CUB-200-2011 dataset\\u201d, \\u201cWe follow the same training strategy as provided by ProtoPNet for the CUB-200-2011 dataset.\\u201d. 
Why is ProtoPNet not used in the comparative experiments (also other non-hierarchical models are used there)?\", \"How can the paper help to \\u201cadvance our understanding of evolution\\u201d? E.g. There is a high chance that for the common species (whose pictures are available in large-scale datasets) the same features that were already used to make such a classification will be highlighted. Could the authors present a use case in which the solution could be successfully used in practice to contribute to the understanding of evolution?\", \"How are the representative images (also could be perceived as prototypical images) chosen from the dataset to allow for proper explanations (some images can be taken e.g. from a non-typical angle and do not show the prototypical features well and make the explanations difficult)?\", \"Is it possible (and if yes, how) to use this solution in other hierarchical problems (not in the area of biology)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I would like to thank the authors for their thorough answer. As I find the paper interesting, I will raise my score to make it positive, good luck!\"}", "{\"comment\": \"Thank you, authors, for addressing my questions and concerns. I appreciate the clarity provided and find the explanations satisfactory. I will maintain my positive scores.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This work presents an extended approach for hierarchical prototype learning by training a neural network to discover hierarchical prototypes using structured information from a phylogenetic tree. Building upon HPNet (Hase et al., 2019), this approach, termed HComP-Net, contrasts with traditional models that use flat structures by incorporating loss functions such as over-specificity loss and discriminative loss.
These functions enable HComP-Net to learn prototypes that align with the hierarchical structure of the tree, enhancing interpretability and consistency. Empirical results highlight HComP-Net\\u2019s ability to produce accurate, semantically coherent prototypes transferable to unseen species. Tested on a dataset of 190 bird species and additional organisms, the model performs better than baseline models. Additionally, HComP-Net has been shown to generate visual hypotheses on evolutionary traits, offering insight into traits across various levels of the phylogenetic tree. This research underscores the potential of interpretable representation learning using structured hierarchical prior knowledge.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: The research builds on the problem initially formulated in HPNet (Hase et al., 2019), specifically addressing the challenge of over-specific prototypes. The authors introduced over-specificity and discriminative loss functions, enabling HComP-Net to learn prototypes that adhere to the hierarchical structure of a phylogenetic tree. This approach enhances interpretability and brings a fresh perspective to prototype learning.\", \"quality\": \"Given the identified problem of over-specific prototypes, the authors demonstrate the efficacy of their methods across fine-grained classification tasks, also showing advancements in interpretability. The proposed model performs consistently well in these tasks, validating its quality and effectiveness.\", \"clarity\": \"The paper is well-organized, with clear explanations of the background, literature, methodology, experimental setup, and results. This clarity enhances the readability and accessibility of the research, making its contributions understandable and well-supported.\", \"significance\": \"This work highlights the potential of interpretable representation learning driven by structured hierarchical knowledge.
By using a phylogenetic tree for guidance, the model provides insights into trait evolution and aligns closely with the biology-inspired data structure, marking a contribution at the intersection of AI and biology.\", \"weaknesses\": \"1) On the Need for a Separate Masking Module\\nThe fact that an additional masking module is required suggests that the initial contributions, particularly the over-specificity and discriminative loss functions, may not fully address the issue of over-specific prototypes. This need points to a limitation in the proposed loss functions' effectiveness in preventing prototypes from becoming overly tailored to specific species. Ideally, a more robust solution would directly manage prototype specificity through the loss functions alone, reducing reliance on extra modules that could complicate the model and potentially impact interpretability or scalability.\\n\\n2) On the Role of the Phylogenetic Tree in Classification\\nIn section 5.1, the authors suggest that achieving high classification accuracy is not the primary goal. However, providing the model with additional information, like a phylogenetic tree during training and inference, could indeed aid classification performance. The tree may allow the model to leverage hierarchical relationships, which could enhance classification accuracy by using shared traits among related species. Thus, there seems to be a disconnect between the claim of not prioritizing classification accuracy and the model's design, which inherently includes information that could enhance it.\", \"questions\": \"1) The CNN backbone used was the ConvNeXt-tiny architecture; why is the fine-grained accuracy from this architecture not included in Table 1?
Will it be better/worse when compared to other methods on the Table?\\n\\n2) In terms of semantically meaningful prototypes, could you discuss the possibility of obtaining similar findings using Explainable AI techniques?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper applies deep learning techniques to uncover evolutionary traits in biological data. It leverages contrastive and orthogonality losses to facilitate hierarchical prototype learning. Additionally, the paper introduces over-specificity and discriminative losses to guide and constrain model training. The proposed method demonstrates improved performance over baseline methods across multiple benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation is interesting and well-justified. The hierarchical structure of biological data presents significant challenges for distinguishing species and identifying evolutionary traits.\\n2. The techniques employed in HComP-Net appear technically feasible. The use of contrastive loss for learning clustered features has proven effective in self-supervised learning, while orthogonality loss helps capture diverse features.\\n3. The visualization results are clear and impressive.\\n4. The paper is well-structured and easy to follow.\", \"weaknesses\": \"1. It seems that the over-specificity and discriminative losses play opposing roles in the direction of model optimization, which raises the question of whether these two losses might interact and lead to abnormal model convergence. It would be beneficial if the authors could provide some theoretical or experimental analysis on this issue.\\n2. 
This is a minor point, but the overall method appears to be a combination of multiple techniques, making the flowchart somewhat complex and redundant.\", \"questions\": \"Please kindly refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the encouraging review and constructive feedback on our work. We have addressed the reviewer\\u2019s comments and questions in detail, and we would be happy to respond promptly if more details are required.\\n\\n**C1.** **Practicality of our method for newly discovered species**\\n\\n> We agree with the reviewer that a newly discovered species will have limited images in comparison to a well-known species. However, a first step in analyzing a newly discovered species involves placing the species in the phylogeny by analyzing the synapomorphies (traits shared between species with a common ancestor) it shares with existing species, before the position is further refined with gene sequencing [1, 2, 3]. For such scenarios, an interpretable model such as ours trained on a set of existing species can be useful to identify the features the new species shares with the existing species, thereby providing an initial placement of the species in the phylogeny. This process can be done with few clean images from the new species. We also perform a similar experiment with unseen species in Section 5.1 Table 2.\\n\\n> Moreover, we would like to note that the number of images required in practice for training HComPNet is not significantly high.
For example, the average number of images per species for the Bird dataset in our experiment is only about 30, which shows that our approach can generate meaningful results even with few clean images per species.\\n\\n\\n**C2.** **Subjectivity in the human interpretation of prototypes and ways to overcome it**\\n\\n> Subjectivity of interpretation is a common challenge to all approaches in interpretable AI, including prototypical networks. Subjectivity can be lessened with the inclusion of more modalities of data, such as textual descriptions along with images. But this direction, to the best of our knowledge, has not been explored in any of the previous works involving prototypical networks. Our work produces prototypical parts of every species linked to evolution that are easy for biologists to visualize. However, the validation of our discovered prototypes as newly hypothesized evolutionary traits still needs further investigation by biologists. \\n\\n> A possible solution to reduce the subjectivity in the analysis of prototypes is to visualize the prototype in multiple training images instead of a single image (e.g., the top-k image patches closest to the prototype) so that the semantic meaning of the prototype is more apparent. A recent approach called ProtoConcepts [5] explores modifying the similarity metric to allow multiple image patches to be equally close to a given prototype. Such visualizations may help with understanding the underlying semantic concept better. We leave such an extension of our work for future research to explore.\\n\\n\\n**C3.** **Minor: some typos/grammatical mistakes**\\n\\n> Thanks to the reviewer for pointing out the error. We have corrected the errors and updated the paper.\\n\\n**C4.** **Comparative experiment with ProtoPNet**\\n\\n> Since the hierarchical extension of ProtoPNet [13] (HPnet [7]) was available, comparing with HPnet seemed more reasonable. However, we hereby provide the performance of ProtoPNet on all five datasets.
We note that while ProtoPNet is performing consistently well, its hierarchical extension HPnet is unable to show consistently good performance on all datasets. On the other hand, HComP-Net shows consistently good performance (although not always better than ProtoPNet) while also learning prototypes at various levels in the hierarchy. As mentioned in our global response, the main objective of our work is not to achieve the highest classification performance, but instead to discover semantically meaningful prototypes that help us explain the process of species evolution and serve as candidates in the discovery of evolutionary traits.\\n\\n| Model | Bird | Butterfly | Fish | Spider | Turtle |\\n|-----------|-------|-----------|-------|--------|--------|\\n| ProtoPNet | 75.80 | 96.82 | 91.11 | 74.17 | 61.74 |\\n| HPnet | 36.18 | 94.69 | 77.51 | 5.85 | 6.38 |\\n| HComP-Net | 70.01 | 97.35 | 90.80 | 76.19 | 58.26 |\\n\\n*continued in next comment ...*\"}", "{\"comment\": \"Thank you very much for your valuable feedback and support!\"}", "{\"title\": \"Final summarization of major reviewer concerns and responses\", \"comment\": \"We would like to once again thank the reviewers for their time and effort in the review process. We hereby provide a concise summary of some of the key concerns raised by the reviewers and our responses to clarify them. Although we have covered only the major concerns here, the complete set of queries and the detailed responses can be found under the respective reviewer comments.\\n\\n**C1. Primary objective of our work** (Reviewers hsuk and z89S)\\n> As we have discussed in the global comment, our primary objective is to learn semantically meaningful prototypes representing common features (or traits) that are shared by multiple species in line with the phylogeny, thereby discovering potential evolutionary traits.
Therefore, our motive is not necessarily to achieve the best classification accuracy that beats all state-of-the-art (SOTA) classification methods. We demonstrate the ability of our approach in discovering semantically meaningful hierarchical prototypes in comparison with baseline hierarchical methods both qualitatively and quantitatively in **Section 5.3 and 5.4** of the paper.\\n\\n**C2. Regarding the choice of using visual data for finding common traits** (Reviewer hRXA)\\n> We would like to note two advantages in working with visual data. *Firstly*, images can capture subtle variations in traits such as the precise shape of leaf venations in plants or wing morphology in insects [4] that are difficult to capture in text-based descriptions. *Secondly*, with the growing availability of image datasets in organismal biology, images are easier to obtain compared to expert-curated text or anatomical data. This is especially true for newly discovered species for which detailed textual descriptions of traits may not be yet available. For such cases, vision-based approaches can be useful in performing initial placement of the species in the phylogeny. Therefore, vision-based tools can serve a unique purpose in the analysis of species, especially when textual or morphological data are not available in abundance.\\n\\n**C3. Subjectivity in the human interpretation of prototypes and ways to overcome it** (Reviewer hsuk)\\n> Subjectivity of interpretation is a common challenge to all approaches in interpretable AI, including prototypical networks. A possible solution to reduce the subjectivity is to visualize the prototype in multiple training images so that the semantic meaning of the prototype is more apparent. For this reason, we visualize top-k images per prototype in most of our visualizations instead of a single image. \\n\\n**C4. 
Practicality of our method for newly discovered species with fewer available images** (Reviewer hsuk)\\n> Newly discovered species are likely to have fewer available images in comparison to well-known species. However, a first step involved in placing a new species on the phylogeny is analyzing synapomorphies (traits shared between species with a common ancestor), before the position is further refined with gene sequencing [1, 2, 3]. Our interpretable model can serve as a tool for performing such initial placement. We also perform a similar experiment with unseen species in **Section 5.2 Table 2**. Moreover, in our experiments we work with the Bird dataset where the average number of images per class is only about 30, which shows that our approach can generate meaningful results even with few clean images per species.\\n\\n**C5. On the Need for a separate masking module in addition to the loss terms** (Reviewer PA9n)\\n> The masking module serves a complementary role that is not addressed by the loss terms. Over-specificity and discriminative losses ensure the prototypes learned at internal nodes are common to all descendant species of that node. However, at higher levels of the tree, the descendants are quite diverse and therefore have little to no visually noticeable shared traits between them. At such higher levels, over-specific prototypes are required to ensure a non-empty set of prototypes to meet the classification objective. However, since they do not represent potential evolutionary traits, the masking module helps us to identify and exclude such over-specific prototypes during the analysis of evolutionary traits.\\n\\n(We have provided the list of all references in the global comment)\"}", "{\"summary\": \"This paper addresses a scientific problem: discovering evolutionary traits in biology from images. To tackle this, it proposes the Hierarchy-Aligned Commonality through Prototypical Network (HComP-Net).
The primary objective is to reduce over-specific prototypes that lose common features expected to be observed in all species sharing a common ancestor. To achieve this, the approach applies contrastive, orthogonality, and discriminative losses to the prototypes, and introduces over-specificity loss and masking to mitigate over-specific prototypes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method details are well-demonstrated, with a clear overview and examples that make the content easy to follow.\\n\\nThe study presents a novel approach to improving the interpretability of prototypical networks, allowing for more accurate representation of parent-child relationships.\\n\\nIt enhances the specialization of deep networks applied in scientific discovery, particularly in phylogenetic analysis.\", \"weaknesses\": \"Two limitations of the experimental demonstration.\\n1. The primary contribution lies in incorporating parent-child relationships to reduce over-specific prototypes within the contrastive losses, while the architecture does not appear to include specific structures for establishing a prototype hierarchy.\\nIn my understanding, proposing a hierarchical structure is a contribution of HPNet, not HCompPNet. This point should be clarified in the title and method description. \\n2. The exclusion of certain related works leaves me with a question about novelty and practical impact. For instance, PIPNet, a recent and closely related method employing self-supervised learning (2023), is not included in the comparison. It only uses HPNet from 2019. However, semantic gaps have relevance to over-specificity, and the authors also mention strong motivation from PIPNet. The reason why the network is excluded needs more detailed explanation or comparison in experiments.\", \"questions\": \"Why does the over-specificity loss adopt a specific log-tanh loss form?
Doesn\\u2019t it simply establish an arbitrary criterion for identifying overly specific prototypes? This choice requires further explanation and discussion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
4rEI2JdHH6
Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model
[ "Zhiwei Xu", "Zhiyu Ni", "Yixin Wang", "Wei Hu" ]
''Grokking'' is a phenomenon where a neural network first memorizes training data and generalizes poorly, but then suddenly transitions to near-perfect generalization after prolonged training. While intriguing, this delayed generalization phenomenon compromises predictability and efficiency. Ideally, models should generalize directly without delay. To this end, this paper proposes GrokTransfer, a simple and principled method for accelerating grokking in training neural networks, based on the key observation that data embedding plays a crucial role in determining whether generalization is delayed. GrokTransfer first trains a smaller, weaker model to reach a nontrivial (but far from optimal) test performance. Then, the learned input embedding from this weaker model is extracted and used to initialize the embedding in the target, stronger model. We rigorously prove that, on a synthetic XOR task where delayed generalization always occurs in normal training, GrokTransfer enables the target model to generalize directly without delay. Moreover, we demonstrate that, across empirical studies of different tasks, GrokTransfer effectively reshapes the training dynamics and eliminates delayed generalization, for both fully-connected neural networks and Transformers.
[ "Grokking", "feature learning", "deep learning theory" ]
Accept (Poster)
https://openreview.net/pdf?id=4rEI2JdHH6
https://openreview.net/forum?id=4rEI2JdHH6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z3dvQlXylh", "z0fZULotPz", "wwZapcX9yD", "vXoPLJU8Ri", "nlCTe1NzmD", "jKhJBoQ4bk", "hcPBq8fwNW", "eQubEFJ7er", "d4EvuVA096", "cT7Hhwvssu", "cByjvWlXsI", "YEna2tX5CV", "XSOwCttuP4", "UL6xEnEIJl", "RRioaqiIZ2", "RGYG3Hx69V", "PTI7uuGswX", "MIc7CpRFmv", "M2aRDG1c8t", "JVHxZMn8EW", "IWnH2Lk06S", "IHgvCpZ73Z", "DnpG1Ybbge", "4BICk7z7LW" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734871620357, 1732381941151, 1732382436457, 1731043450181, 1732862091632, 1732381110324, 1732381620541, 1732597658570, 1732685683984, 1730601183039, 1737523613328, 1732512813686, 1732381534019, 1731128895786, 1733194277335, 1730407674508, 1732692122494, 1733194348544, 1732383702198, 1732380788695, 1730905791644, 1732382978752, 1732383238879, 1732620989165 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4005/Area_Chair_wTJP" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_g3Q3" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_uoKT" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_uoKT" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_uoKT" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4005/Reviewer_GjdD" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_s3cV" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_s3cV" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_opL6" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Authors" ], [ "ICLR.cc/2025/Conference/Submission4005/Reviewer_opL6" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes GrokTransfer, a new method to accelerate grokking based on transferring embeddings from a weak model. The authors provide theoretical analysis on a simple XOR classification task and numerous empirical experiments on modular addition and multiplication, with additional results on MNIST in the rebuttal using different architectures of weak models and target models. There were some concerns regarding the implementation of transferring the embedding matrix (low rank, full rank of A, B) from the weak model to the target model, which seem addressed by the authors' additional results (Fig 15). The authors should make this part clear in the main text (that the low rank of A, B is not required) and move Fig 15 to the main draft. Given the theoretical and empirical contributions of this work (novelty and superior performance), I recommend accepting this paper.
The authors are urged to release the code for reproducibility and further improve the paper by including discussion/clarification paragraphs and in-depth comparisons with baselines on more datasets, based on the reviewers' questions and comments in the rebuttal period.\", \"additional_comments_on_reviewer_discussion\": \"There are some shared concerns from the reviewers regarding the details and applicability of the proposed methods (to more tasks), and the authors addressed the concerns by providing additional experiments in the rebuttal, including a more in-depth study on XOR cluster data, the modular addition task, the FNN-to-Transformer scenario, and MNIST experiments. The authors also strengthen the paper by comparing with a recent method, GrokFast, to show that their method achieves faster grokking.\\n\\nThere are also some questions regarding the construction of the embedding matrix in GrokTransfer, and the authors provided additional results showing that the rank of the embedding matrix in the weak model does not have a specific requirement; the crucial point is to initialize the target model's embedding matrix from the weak model's.\"}", "{\"comment\": \"We thank the reviewer for the detailed review and for appreciating the contributions of our paper. It is encouraging to see that the reviewer thinks our method is \\u201cinteresting\\u201d and \\u201cnovel\\u201d and has a detailed case study.\\n\\n- ***It is unclear how well the method would generalize and scale to more complicated problems where such acceleration can make a real impact. ...... It would be interesting to see if this is really the case or not.***\\n\\nThank you for your suggestion. We have added results for the MNIST dataset in Figure 14 (page 27) in the updated pdf, where grokking was observed with large initialization in [1]. Following the setting in [1], 200 samples are randomly selected as the training data.
We train a three-layer MLP as the weak model and transfer its embedding to the target model, a four-layer MLP, and compare its dynamics with training from scratch, both with initialization scale $\\\\alpha$=100. Figure 14 shows that GrokTransfer outperforms training from scratch and almost eliminates the generalization delay.\\n\\n- ***The paper does not provide much insight on how to design or choose the weaker model. ......***\\n\\nThanks for the comment. The general rule of thumb we found is to start from a computationally light model (e.g. a small MLP) as the weak model and see if it can show some degree of generalization. If not, we can gradually grow the model to a larger size until it shows nontrivial generalization.\\n\\nTake modular arithmetic tasks as an example. Empirically it\\u2019s sufficient for the target model to perfectly generalize if the weak model can achieve around 40% validation accuracy on modular arithmetic tasks, and a two-layer MLP with 4-dim embedding can already learn a good embedding with periodic structure. See Figure 12 (page 27) in the updated pdf. The t-SNE embedding shows the embedding has structure similar to Fourier embedding with frequency 1/66.\\n\\n- ***As the authors have mentioned in the paper, the theoretical result only considers a relatively simple XOR task. There does not seem to be any clear indication if the analysis could potentially be applied to more general problems. Therefore, the significance of this result is in doubt.***\\n\\nWe agree that our theoretical result is in a simple setting and we do not think that the same analysis applies to more general problems. Note that rigorously analyzing the training dynamics in neural networks is a big theoretical challenge and even simple settings like ours are already difficult to analyze. 
Our theoretical result is meant to provide some initial justification and insights about how our algorithm can possibly work, especially when the weaker model doesn\\u2019t generalize optimally.\\n\\n- ***Would a \\u201csmoother\\u201d version of the proposed method work even better? For example, one could start from a single embedding layer with a small number of neurons and gradually deepen/widen it, i.e., adding more layers and adding more neurons to each layer.***\\n\\nWe appreciate the reviewer\\u2019s question and suggestion. We think this is an interesting idea. For the purpose of accelerating grokking, we found that transferring from a single weaker model to a target model is already sufficient. We agree that the proposed \\u201csmoother\\u201d version is potentially useful and is worth studying in future work.\\n\\n[1] Omnigrok: Grokking Beyond Algorithmic Data\"}", "{\"comment\": \"We thank the reviewer for their review and address their comments below. We hope that the response satisfactorily addresses the reviewer\\u2019s questions and that the reviewer will consider raising their score in light of our response.\\n\\n- ***The authors tested three algorithmic tasks. ...... Without these experiments, I can't help but think that the generalizability of GrokTransfer is limited.***\\n\\nThank you for your question. We have included the comparison for the modular multiplication task in Figure 13 Left (page 27) in the updated pdf. As for the (40,3)-parity task, we followed the format in [1] and trained several transformers on the task, none of them showing delayed generalization or grokking. Please see Figure 13 Right for the dynamics of a 2-layer and a 4-layer decoder-only transformer on the (40,3)-parity task.
We hypothesize that due to the existence of positional embedding, it is much easier for the transformer to identify the signal tokens than for an MLP, and thus grokking is not observed.\\n\\n- ***Regarding W1, the FNN->transformer setting was tested on only one task.......***. \\n\\nThank you for your suggestion. We have included the comparison with GrokFast on modular multiplication tasks in the updated pdf. As shown in Figure 16 (page 28), our method still outperforms GrokFast.\\n\\n- ***Grokking is known to occur beyond algorithmic data [1], which is already cited in the paper. I would like to see how GrokTransfer performs on non-algorithmic tasks, such as image classification with MNIST, as explored in [1].***\\n\\nThank you for your suggestion. We have added results for the MNIST dataset in Figure 14 (page 27) in the updated pdf. Following the setting in [1], 200 samples are randomly selected as the training data. We train a three-layer MLP as the weak model and transfer its embedding to the target model, a four-layer MLP, and compare its dynamics with training from scratch, both with $\\\\alpha$=100. Figure 14 shows that GrokTransfer outperforms training from scratch and almost eliminates the generalization delay.\\n\\n\\n- ***A new paper [2] has been recently released on accelerating grokking through weight transfer based on the Kolmogorov-Arnold (KA) representation theorem......***\\n\\nSection 4.3 of [2] studies weight transfer across different arithmetic tasks. Specifically, it first trains a transformer on one task and then transfers its embedding to another task, which is very similar to what [3] did.
**In Figure 2 in [3], they found that it actually prevents the new transformer from perfect generalization.**\\n\\nGiven a task A, the goal of [2,3] is to construct an informative embedding from different but similar tasks B,C to accelerate generalization, while our idea is to utilize the existing training data of A to construct an informative embedding. Their weight transfer across different tasks implies that the model is trained on many more data points than ours and needs to be trained multiple times, which is not directly comparable to our work.\\n\\n- ***There are reference mistakes in the paper, concerning arXiv citations that should point to conference proceedings.***\\n\\nThank you for pointing out this issue. We have fixed it in the updated pdf.\\n\\n[1] Omnigrok: Grokking Beyond Algorithmic Data\\n\\n[2] Acceleration of Grokking in Learning Arithmetic Operations via Kolmogorov-Arnold Representation\\n\\n[3] Towards Empirical Interpretation of Internal Circuits and Properties in Grokked Transformers on Modular Polynomials\"}", "{\"summary\": \"This paper proposes a method to accelerate grokking via embedding transfer from a weak model. The authors first observe that data embedding plays a crucial role in determining whether generalization is delayed. Then they initialize the embedding weights extracted from training a smaller and weaker model to the target, strong model.
Finally, the authors give a rigorous proof on a synthetic XOR task with simple network settings, and provide empirical studies showing the effectiveness of the proposed GrokTransfer method on fully-connected networks and transformers.

**Soundness:** 3
**Presentation:** 3
**Contribution:** 2

**Strengths:**
1) This paper is clearly written and the idea is easy to follow.
2) The choice of the task (XOR) and the setting of the model (a very simple network) is easy to interpret for the purposes of this paper's findings.

**Weaknesses:**
1) The key observation, namely that the data embedding plays a crucial role in the generalization delay, is not well studied. As the proposed acceleration method is based on this observation, it would be better to provide more evidence that this observation is indeed correct. In addition to the modular addition task, providing results on other tasks would make this observation more convincing. Also, instead of the data embedding layer only, is the initialization of other layers a possible cause of the delay? An ablation study on other possible causes of the delay would be better.

2) In addition to simple settings, verification on more complicated tasks would make the finding more general and solid.

3) In the theoretical analysis, the proof seems a little bit limited because of the choice of a very simple setting.

4) It would be better if the authors made a clearer study of how to choose the "weaker and smaller" model to get the acceleration with good test performance.

**Questions:**
1) In Section 3.2, could you add more details on getting the SNR with different embeddings?

2) The intuition for why the data embedding makes a difference to the generalization speed is still not very clear.

If we take all layers before the last linear layer (which include the data embedding layer and some hidden layers) as doing feature engineering, as mentioned in [1], will the claim in Section 3.2 ("with this new embedding, data points are well separated in a three-dimensional space with a relatively high signal-to-noise ratio (SNR) compared to the original embedding") still hold if we pass this data embedding through the hidden layers above it? Does the separation among the feature vectors obtained from the last hidden layer make more sense than the one among data embeddings?

[1] Papyan, V., Han, X. Y., & Donoho, D. L. (2020). Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40), 24652-24663.

**Ethics Review:** No ethics review needed.
**Rating:** 3
**Confidence:** 3
**Code of Conduct:** Yes

---

Thank you for your reply and for taking the time to engage with our response. We believe there may be some misunderstandings regarding certain aspects of our method, particularly the source of the generalization speed-up and the interpretation of our results in Figure 15. We would like to clarify these points further to address your concerns.

- ***The authors argue that A and B can be merged into the full matrix E_T at the end of training, with minimal effects on the performance.***

We wish to clarify that we did not claim A and B can only be merged at the end of training. Instead, this merge operation can happen in an early training phase, e.g. at the 100th epoch (when the model has less than 10% test accuracy), as shown in Figure 15(b).

- ***The mentioned Figure 15(b) does not seem to mention what task this is evaluated on***

Thank you for pointing this out.
Figure 15(b) is trained on the modular addition task, with the same weak and target models as in Figure 6(a): a 2-layer MLP as the weak model and an 8-layer transformer (embedding dim = 512) as the target model.

- ***This makes the experiment rather self-fulfilling, since a fully parameterized E_T matrix is not even necessary for the task in the first place. Instead, it would be more convincing if the authors were to demonstrate experiments on tasks where the greater expressiveness provided by E_T is necessary for achieving good performance.***

Even though, from an expressive-power point of view, a low-rank embedding matrix suffices, that doesn't mean it is easy to find a solution that generalizes optimally. As we showed in Figure 15(a), directly training the low-rank factorization AB from random initialization only yields <50% test accuracy on the modular addition problem. On the other hand, with the standard fully parameterized embedding E_T, the model can eventually achieve 100% test accuracy, but only through grokking. This means that, from the training point of view, if we only consider random initialization, the full parameterization is still necessary.

By using an initialization transferred from a weaker model, our method GrokTransfer clearly improves over random initialization (with or without low-rank parameterization).

- ***Without the low rank conditions, the main contribution and strength of the method, which is providing large speed-ups in combined training time, disappears.***

We wish to clarify that **the embedding dimension of the weak model takes up only a small proportion of the total computation FLOPs** in the model and has little influence on the combined training time. Factors like model width, depth, architecture, and the use of CUDA have a more significant impact on training time.
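As a rough back-of-the-envelope illustration (our own sketch with hypothetical per-token FLOP counts for a width-512, 8-layer transformer, ignoring attention-score and normalization terms), the embedding map's share of compute stays small even for large weak-model embedding dimensions:

```python
# Rough per-token FLOPs for an 8-layer transformer with model width d = 512
# (illustrative dimensions only; attention-score terms and layer norms ignored).

def linear_flops(n_in: int, n_out: int) -> int:
    """FLOPs of one dense layer applied to a single token (multiply + add)."""
    return 2 * n_in * n_out

def transformer_flops(d: int, n_layers: int) -> int:
    # Per block: Q, K, V, O projections (4 * d*d) plus a 4x-wide MLP
    # (d -> 4d -> d), i.e. roughly 24*d^2 FLOPs per token per layer.
    per_block = 4 * linear_flops(d, d) + linear_flops(d, 4 * d) + linear_flops(4 * d, d)
    return per_block * n_layers

d_target, n_layers = 512, 8
body = transformer_flops(d_target, n_layers)

for d_weak in (4, 8, 128, 256, 512):
    # Factorized embedding E_T = A @ B: the row lookup in A is free;
    # the d_weak -> d_target map is the only extra per-token cost.
    embed = linear_flops(d_weak, d_target)
    print(f"d_weak={d_weak:4d}: embedding share of FLOPs = {embed / (embed + body):.5f}")
```

Even with the weak embedding dimension equal to the target width (512), the embedding map in this sketch is only about 1% of per-token FLOPs, consistent with the wall-clock ablation.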
In GrokTransfer, the combined training time of the weak and target models is not drastically affected by the weak model's embedding dimension, as only a small portion of the target model's FLOPs comes from the embedding layer, whose FLOPs depend linearly on this dimension.

To investigate the influence of the weak model's embedding dimension on training time, we have added ablation experiments by varying the embedding dimension of the weak model. The results are summarized in the table below, showing that the training time per epoch does not drastically change with the embedding dimension.

### **Wall-clock Time (ms) of the Backward Pass (Model Training) per Epoch (averaged over 100 epochs)**

| **Embedding Dim (Weak model)** | **Weak Model** | **Target Model (GrokTransfer)** | **Target Model (scratch)** |
|---|---|---|---|
| 4 | 0.746 | 36.119 | 35.642 |
| 8 | 0.702 | 37.360 | 35.642 |
| 128 | 0.855 | 36.724 | 35.642 |
| 256 | 0.801 | 37.033 | 35.642 |
| 512 | 0.827 | 37.650 | 35.642 |

---

**Response to Questions**

- ***Could you comment on why you chose a weak model with 3 neurons......***

We use 3 neurons because we want to show that even when the weaker model doesn't generalize perfectly, its embedding can still help the target model to achieve optimal generalization with much faster speed, which matches our empirical observation. If the weaker model has 4 (or more) neurons and achieves perfect generalization itself, the story won't change and the target model can still generalize after one step with GrokTransfer.

If the weaker model has two neurons, GrokTransfer still works and our theory still applies. The weak model will learn two out of four features and its embedding will project the data onto a 2-D space, as shown in Figure 11 Middle (page 26) in the updated version.
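To illustrate the two-neuron case numerically, here is a toy sketch (idealized, noise-free XOR inputs; not the paper's actual training code): two neurons aligned with the feature directions (1,1) and (1,-1) embed the four XOR points so that a simple threshold separates the classes, while no single linear projection can.

```python
# Toy XOR check: x in {+1,-1}^2, label y = x1 * x2 (idealized, noise-free).
points = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
label = lambda x: x[0] * x[1]

# Two "learned" neurons aligned with feature directions (1,1) and (1,-1)
# give a 2-D embedding z = (w1 . x, w2 . x).
def embed2d(x):
    return (x[0] + x[1], x[0] - x[1])

# In this embedding, class +1 lands on (+/-2, 0) and class -1 on (0, +/-2),
# so the sign of |z1| - |z2| recovers the label exactly.
preds = [1 if abs(z[0]) > abs(z[1]) else -1 for z in map(embed2d, points)]
assert preds == [label(x) for x in points]

# A single neuron gives a 1-D projection w . x; for every direction w, the
# projections of each class are symmetric around 0, so no threshold works.
def separable_1d(w):
    pos = sorted(w[0] * x[0] + w[1] * x[1] for x in points if label(x) == 1)
    neg = sorted(w[0] * x[0] + w[1] * x[1] for x in points if label(x) == -1)
    return pos[-1] < neg[0] or neg[-1] < pos[0]  # one class strictly below the other

assert not any(separable_1d((a / 4, b / 4)) for a in range(-8, 9) for b in range(-8, 9))
print("2 neurons: separable; 1 neuron: never linearly separable")
```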
If the weak model has only one neuron, GrokTransfer does not work. As shown in Figure 11 Left (page 26), the weak model's embedding projects the data onto a 1-D space, where both the +1 and -1 classes are located on both sides of the origin, making it impossible for the target model to classify them. Please see more details on page 25.

- ***For the modular addition/multiplication problems, what would be the smallest weak model...***

We appreciate the reviewer's question. Empirically, we find that it is sufficient for the target model to generalize perfectly as long as the weak model achieves a nontrivial test accuracy (e.g. 40%) on modular arithmetic tasks. For example, a two-layer MLP with a 4-dimensional embedding is sufficient. We also found that such a small model can already learn a good embedding with a periodic structure; see Figure 12 (page 27) in the updated pdf. The t-SNE visualization shows that the embedding has a structure similar to a Fourier embedding with frequency 1/66.

Theoretically, we believe that as long as the model has enough expressivity to extract useful features, it can be used as a weak model. [5, 6] both give constructions for 2-layer MLPs on the modular addition task based on the Fourier embedding.

- ***For Theorem 3.2, is this in a kernel regime?...***

Thank you for your question. It is not in the kernel regime but rather a feature-learning regime, where the learning rate is much larger than the initialization scale. In this task, training only the second layer cannot achieve one-step generalization. We have updated the manuscript to clarify this point.

- ***Figure 4: Should I infer from the caption that the test accuracy becomes high...***

In Figure 4, it is correct that the test accuracy becomes high around the same point for both the smaller and larger models, while the larger model's training accuracy becomes perfect more quickly. However, note that the smaller model is much more efficient to train than the larger model due to their size difference. Our main point is that this smaller model's embedding can be used to significantly accelerate the larger model's generalization (i.e., it generalizes perfectly in one step), and therefore GrokTransfer achieves an overall speedup compared with training the larger model from scratch.

As for whether or not the smaller model exhibits grokking, we agree that it isn't as clear as for the larger model. The delayed-generalization phenomenon is still quite clear here, but the gap between the training and test curves isn't as pronounced. That said, this has no effect on our story, since we only care about using the smaller model to help accelerate grokking in the larger model, so the training dynamics of the weak model don't play an important role in the story.

- ***L 298: Do you mean that whenever the test accuracy is around 75% or above, ...***

Yes. Empirically, we found that the 3-neuron NN always achieves ~75% test accuracy as long as it is trained for sufficiently many steps, and that each time it learns three out of the four features $[\pm 1, \pm 1]$. The specific features learned depend on the weight initialization and the sampling of the training data.
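The ~75% figure can be sanity-checked with a tiny idealized calculation (our toy sketch on the four cluster means only, ignoring the noise coordinates): if the three learned neurons align exactly with three of the four feature directions, the network classifies those three clusters correctly and outputs 0 on the missing one.

```python
# Idealized 3-neuron ReLU net on the four XOR cluster means (noise ignored).
means = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
label = lambda m: m[0] * m[1]          # XOR label of a cluster mean

learned = means[:3]                    # the net has aligned with 3 of the 4 features

def f(x):
    # Each neuron: weight w_k = a learned mean, output weight a_k = its label.
    return sum(label(w) * max(0.0, w[0] * x[0] + w[1] * x[1]) for w in learned)

# Predict +1 when f(x) > 0, else -1; the missing cluster gets f = 0 and is missed.
preds = [1 if f(m) > 0 else -1 for m in means]
acc = sum(p == label(m) for p, m in zip(preds, means)) / len(means)
print(acc)  # 0.75
```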
We have clarified this in the updated pdf.

[1] Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data

[2] SGD Finds then Tunes Features in Two-Layer Neural Networks with Near-Optimal Sample Complexity: A Case Study in the XOR problem

[3] Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data

[4] Scaling Laws for Neural Language Models

[5] Feature emergence via margin maximization: case studies in algebraic tasks

[6] Grokking modular arithmetic

---

**Response to Questions**

- ***In section 3.2, could you add more details on getting the SNR with different embeddings?***

In the original embedding, each sample x is a concatenation of a signal vector and a noise vector. The SNR is the ratio of the norm of the signal vector to the norm of the noise vector, i.e. $\sqrt{2/(\epsilon (p-2))}$. After training the weak model, we extract its first linear layer W and apply the corresponding linear transformation to the data distribution, i.e. $x \mapsto W^\top x$. $W^\top x$ can be interpreted as the new embedding of x. Since the signals are the first two coordinates of x, and W has learned this structure, the SNR of $W^\top x$ will be larger.

- ***The intuition on why the data embedding makes a difference on the generalization speed is still not very clear.***

In the XOR task, the learned data embedding projects the original representation into a 3-D space with higher SNR, enabling the model to generalize without delay. In the modular addition task, the learned data embedding has a structure similar to a Fourier embedding, which effectively changes the loss landscape.

- ***If we take all layers beyond the last linear layer as doing feature engineering, as mentioned in [1], will this claim...***

This is a good question. It is widely believed that low-level layers learn fundamental features while deeper layers learn more complex, task-specific features [3].
Since the weak model may not generalize optimally, we tend to believe it does not learn all the necessary features, or may have learned some incorrect features. To minimize the misinformation transferred into the target model, we think transferring only the data embedding is a good idea.

As for the reviewer's question about whether the claim in Section 3.2 holds for later layers, we note that this claim only applies to the one-hidden-layer setting studied in Section 3, where the hidden layer we look at is already the penultimate layer. We agree that further investigating this for deeper networks is an interesting direction, but it is beyond the scope of this work.

[1] Omnigrok: Grokking Beyond Algorithmic Data

[2] Towards Understanding Grokking: An Effective Theory of Representation Learning

[3] Visualizing and Understanding Convolutional Networks

---

Thank you for your constructive feedback.

- ***MNIST experimental setup uses only 200 training samples***

We have included an MNIST experiment with 1000 training samples in Figure 18 of the updated pdf to align with the setting in Omnigrok [1]. We can see that GrokTransfer outperforms training from scratch and almost eliminates the generalization delay.

- ***Presentation of Results***

Thank you for your suggestion. We have reorganized the manuscript, moved the original Figure 6 to the Appendix, and highlighted all revised parts in red. Please see the updated pdf for details.

---

I find that my concerns have been properly addressed. Based on the paper's theoretical and empirical contributions, I will update my score to 8.

While I find the current experiments well-executed, I believe the impact of the paper could be further enhanced by extending the experiments to more challenging image classification datasets (with more than 1000 training samples), such as CIFAR-100, STL-10, or even ImageNet. If the paper is accepted, I encourage the authors to consider conducting such experiments for the camera-ready version. These additional experiments would add significant value to the research, further solidifying its contribution to the field.

---

**Summary:** The paper introduces GrokTransfer, a method that expedites grokking by transferring embeddings from a weaker model. Through a simple XOR classification task, the authors offer both theoretical and empirical justification for GrokTransfer. Experiments on other algorithmic tasks show its effectiveness in embedding transfer from FNNs to FNNs/transformers.

**Soundness:** 2
**Presentation:** 2
**Contribution:** 3

**Strengths:**
1. The authors provide a sensible justification for the core idea of their method through a motivating experiment in Section 2.1.
2. They demonstrate its effectiveness both empirically and theoretically, highlighting improvements in computational costs and performance.
3. The proposed method is simple and straightforward, and if it can generalize to broader tasks and architectures, it has significant potential.

**Weaknesses:**
1. The authors tested three algorithmic tasks -- modular addition, modular multiplication, and the (40,3)-parity task -- for FNN->FNN transfers. However, they provided results for only one task (modular addition) in the FNN->transformer setting. Given that FNN->transformer transfers are more practical and higher in potential, there seems to be no reason to exclude modular multiplication and the (40,3)-parity task. Without these experiments, I can't help but think that the generalizability of GrokTransfer is limited.

**Questions:**
1. Regarding W1, the FNN->transformer setting was tested on only one task. I would like to see GrokTransfer's performance on more algorithmic tasks in this setting, especially in comparison to GrokFast. According to Table 1, training is very fast with GrokTransfer, so it should be quick to perform experiments on additional tasks (including those not tested in the paper).

2. Grokking is known to occur beyond algorithmic data [1], which is already cited in the paper. I would like to see how GrokTransfer performs on non-algorithmic tasks, such as image classification with MNIST, as explored in [1].

3. A new paper [2] has recently been released on accelerating grokking through weight transfer based on the Kolmogorov-Arnold (KA) representation theorem. Your approach seems simpler, but the approach in [2] can even handle two non-standard arithmetic tasks -- composition of operations and systems of equations. How does GrokTransfer compare to [2], in terms of theoretical basis, performance, and practicality?

[1] Ziming Liu, Eric J Michaud, and Max Tegmark. Omnigrok: Grokking beyond algorithmic data. In The Eleventh International Conference on Learning Representations, 2023.

[2] Yeachan Park, Minseok Kim, & Yeoneung Kim. Acceleration of Grokking in Learning Arithmetic Operations via Kolmogorov-Arnold Representation. arXiv preprint arXiv:2405.16658, 2024.

Minor:
- Typo at line 255: "need [to] learn"
- There are reference mistakes in the paper, concerning arXiv citations that should point to conference proceedings. Here is an example of a wrong reference:

  Tanishq Kumar, Blake Bordelon, Samuel J Gershman, and Cengiz Pehlevan. Grokking as the transition from lazy to rich training dynamics. arXiv preprint arXiv:2310.06110, 2023. (Published in ICLR 2024)

Note: the minor points did not influence my final score.

**Ethics Review:** No ethics review needed.
**Rating:** 8
**Confidence:** 3
**Code of Conduct:** Yes

---

**Paper Decision:** Accept (Poster)

---

Thank you for your responses to my concerns.
While I appreciate the efforts to address my comments, I would like to provide some additional feedback and seek clarification on certain aspects:

Training Data Size in Comparison to Omnigrok [1]
- I note that my concern regarding the limited experiments with FNN-to-Transformer transitions has been partially addressed. I observed that your MNIST experimental setup uses only 200 training samples, whereas the Omnigrok paper employed 1000 training samples (refer to page 5 in [1] and the official code). Could you elaborate on the rationale for this decision?

Presentation of Results
- I believe the research community would find the FNN->Transformer experiments more significant than the FNN->FNN experiments. In this regard, I suggest reorganizing the presentation of results. Specifically, consider including the FNN->Transformer version of Figure 6 in the main text, as it represents a key aspect of your contribution. The FNN->FNN results, while informative, could be moved to the Appendix to streamline the narrative and focus on the most impactful findings.

Highlighting Revised Sections
- To further facilitate the review process, it would be very helpful if the sections of the manuscript that have been revised in response to reviewer comments were clearly highlighted in the PDF (e.g., using color or annotations). This would allow reviewers to quickly locate and evaluate the changes made based on their feedback.

Thank you again for your thoughtful responses and for the hard work put into addressing the review comments. I would like to update my score after seeing additional responses and revisions from the authors.

---

**Response to Weaknesses**

We thank the reviewer for their review and address their comments below. We hope that the response satisfactorily addresses the reviewer's questions and that the reviewer will consider raising their score in light of our response.

- ***the key observation which is that the data embedding plays a crucial role ...***

Thank you for the comment. In Section 2, we show that a good data embedding can effectively change the training dynamics and make the grokking phenomenon disappear. This serves as the motivation for proposing our GrokTransfer method. The observation that the data embedding plays an important role also appeared in [2], where the authors found that generalization coincides with the emergence of periodic structure in the embedding (Figure 1 in [2]).

On the other hand, we do not claim that the data embedding is the only reason behind grokking, or that changing the data embedding is the only way to accelerate grokking. We think it is possible that other factors, such as the initialization of other layers, also play a role here. But we focus on the embedding layer because it is easy to transfer the embedding across different model sizes and different model architectures, making it more broadly applicable. We think the fact that GrokTransfer works so well across different tasks and different architectures is convincing evidence that changing the data embedding alone is sufficient for eliminating grokking.

- ***In addition to simple settings, verification on more complicated tasks would make the finding more general and solid.***

Thank you for your suggestion. We have added results for the MNIST dataset in Figure 14 (page 27) of the updated pdf; grokking was observed on this dataset with a large initialization scale in [1]. Following the setting in [1], 200 samples are randomly selected as the training data. We train a three-layer MLP as the weak model and transfer its embedding to the target model, a four-layer MLP, and compare its dynamics with training from scratch, both with alpha=100. Figure 14 shows that GrokTransfer outperforms training from scratch and almost eliminates the generalization delay.

- ***In the theoretical analysis, the proof seems a little bit limited because of the choice of very simple setting.***

We agree that our theoretical result is in a simple setting. Note that rigorously analyzing the training dynamics of neural networks is a notoriously difficult theoretical challenge, and even simple settings like ours are already difficult to analyze.

Our theoretical result is meant to provide some justification and insight into why our algorithm works. A more comprehensive theoretical justification is beyond the scope of this paper.

- ***It would be better if the authors make a clearer study on how to choose the "weaker and smaller" model to get the acceleration with good test performance.***

Thank you for the suggestion. Empirically, we can start from a computationally light model as the weak model and see if it shows some degree of generalization. If not, we can gradually grow the model size until the model shows nontrivial generalization. We will add this discussion to the manuscript.

---

**Summary:** The paper studies ways to reduce the "delayed generalization" in grokking by first learning embeddings in a smaller, faster model where generalization isn't delayed, and then transferring the embeddings to a larger model. This is shown empirically in various examples including modular addition, multiplication, and parity with fully-connected nets or transformers. The authors also show theoretically that fast generalization occurs in XOR when applying the GrokTransfer technique on top of a weak solution that was found with only 3 neurons (this solution is shown to be found empirically).

**Soundness:** 3
**Presentation:** 2
**Contribution:** 3

**Strengths:** Grokking is a surprising phenomenon that leads to seemingly unpredictable outcomes.
The proposed GrokTransfer method seems like an empirically effective way to mitigate this. The theoretical claims also justify why good embeddings obtained from a small model can help accelerate generalization in a larger model. Overall, this makes the work interesting and significant.

**Weaknesses:**
- The theory on XOR would benefit from a better understanding of the "weaker model", which is currently mostly empirical. It would be great to have even partial results for this part, even in a simpler setting (e.g. training a single neuron on this data might be tractable and similar to previous work?).
- The presentation could be improved in various ways; in particular, the notion of "small model" considered isn't very precisely defined and would benefit from clarification. Other aspects that could be discussed further include trade-offs between width and the number of iterations needed to generalize: it seems that smaller models reduce the "delayed generalization" but also slow down convergence in terms of epochs. Perhaps this changes when looking at compute/FLOPs, and it would be good to include such plots.
- See also the questions below.

**Questions:**
- Could you comment on why you chose a weak model with 3 neurons in Section 3.2? Would the story change if you had 4 of them and achieved perfect accuracy? My impression is that the delayed generalization mostly happens when you are heavily over-parameterized and can very quickly memorize all the training data in early epochs, which still wouldn't happen with 4 neurons. Also, would the transfer still work if the weak model had fewer than 3 neurons?
- For the modular addition/multiplication problems, what would be the smallest weak model that can lead to good transferable embeddings (in practice, but also in theory with an appropriate construction)?
- For Theorem 3.2, is this in a kernel regime? Would training only the second layer instead give similar guarantees? Regardless, a brief discussion after the statement would be a good addition.

Other, minor:
- Figure 4: Should I infer from the caption that the test accuracy becomes high around the same point (near 100 epochs), while the larger model also displays the property that training accuracy is high very quickly? Also, is it reasonable to say that the small model does not exhibit grokking here? Please clarify these points since they seem important for the story.
- L 298: Do you mean that whenever the test accuracy is around 75% or above, you found empirically that the neurons consist of three such features? Please rephrase as this is not very clear.

**Ethics Review:** No ethics review needed.
**Rating:** 5
**Confidence:** 4
**Code of Conduct:** Yes

---

**Summary:** The paper introduces a method to mitigate grokking, the scenario in which a model overfits the training dataset long before it begins to generalize to unseen test data. This is done by training a smaller model with a much lower embedding dimension (number of features in the first layer), which converges quickly and without grokking. The architecture of the original model is then modified by replacing the first embedding layer with the product of two matrices $E_T = AB$, where $A$ and $B$ are of much lower rank. $A$ is initialized with the embedding layer of the smaller model.
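Concretely, my understanding of this construction can be sketched as follows (a minimal illustration with made-up dimensions, not the authors' code):

```python
# Minimal sketch of the factorized embedding E_T = A @ B (made-up sizes):
# vocab d_v = 6, weak embedding dim d_W = 2, target embedding dim d_T = 4.
import random

random.seed(0)
d_v, d_W, d_T = 6, 2, 4

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# A: stands in for the weak model's trained embedding table (d_v x d_W).
A = [[random.uniform(-1, 1) for _ in range(d_W)] for _ in range(d_v)]
# B: freshly initialized projection up to the target width (d_W x d_T).
B = [[random.uniform(-1, 1) for _ in range(d_T)] for _ in range(d_W)]

# Factorized forward pass for token t: row t of A, then map through B.
def embed_factored(t):
    return matmul([A[t]], B)[0]

# Merging the factors yields an ordinary full embedding table E_T = A @ B,
# and the two parameterizations produce identical embeddings.
E_T = matmul(A, B)
for t in range(d_v):
    assert all(abs(u - v) < 1e-12 for u, v in zip(embed_factored(t), E_T[t]))
print("factored and merged embeddings match")
```

Merging $A$ and $B$ back into a single full table leaves the computed embeddings unchanged, which appears to be what the merge experiment in Figure 15(b) relies on.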
This model is evaluated on several synthetic tasks -- modular addition, modular multiplication, and a (40,3)-parity task defined as learning $y = \Pi_{i \in \{1,2,3\}} x_i$ where $x \in \{\pm 1\}^{40}$. Experiments show that, compared to the original model with random initialization, the resulting model mitigates grokking and converges faster. Theoretical analysis is also provided for a specific case involving learning $y = x_1$ XOR $x_2$ for an 80K-dimensional vector $x$.

**Soundness:** 2
**Presentation:** 4
**Contribution:** 2

**Strengths:**
- The method is clearly described and easily understandable and implementable.
- The combined time for the proposed method appears much faster than the baseline method of training the original target model from scratch. Table 1 suggests it is more than 5x faster. However, ablations are needed to determine whether the speedup comes from architectural modifications or from the proposed method.
- For the tasks studied, the model performs well in terms of reducing grokking and accelerating convergence (although comparisons are not apples-to-apples -- see Weakness 1).

**Weaknesses:**
- My main concern is that all the numbers reported for GrokTransfer and the target model (Large) are not directly/fairly comparable. The model used for GrokTransfer parameterizes the first layer as $E_T = AB$, where $A$ and $B$ are (very) low-rank matrices. On the other hand, the original target model uses the full $d_v \times d_T$ matrix, which is significantly larger; the number of parameters in this layer is much larger than in $E_T = AB$. A fair comparison should not even consider target models with the full $d_v \times d_T$ embedding layer, but only those parametrized with $AB$, where, for instance, $A$ can be randomly initialized instead of being initialized from the weaker model. As such, it is difficult to conclude from any of the existing results whether the improvement comes from the weak-model initialization or simply from the architectural modifications.
- This defeats the purpose of studying/mitigating grokking in the first place, where the goal is to be able to train over-parametrized models with minimal grokking. This method instead replaces the over-parametrized model with one of more manageable complexity, which ties into the next weakness.
- The method also greatly reduces the expressivity of the original target model, by adding a very low-rank bottleneck to the first layer. While this works for the synthetic closed-form tasks considered in the paper, it suggests that (1) the original target model architecture is clearly ill-suited, since it converges perfectly even after introducing a very low-rank bottleneck, and (2) the ability of the method to generalize to more complex tasks which require greater expressivity is limited and questionable.
- The general motivation of the method appears to be applying dimensionality reduction to pre-process inputs that have very low SNR. This opens up questions regarding how the method compares to classical techniques like LDA.
- The method introduces an additional layer of complexity in the training pipeline, due to having to train an additional model which would require its own architectural and hyperparameter sweeps.
- Minor comments on notation: inconsistent notation switching between $d_{embed}$ and $d_W$ / $d_F$, which makes it hard to look up. The notation in the line $y = x_1 x_2$ is also ambiguous; I assume the multiplication operation here is XOR. The notation $p$ is also overloaded, denoting both the modulus and the input dimension in the XOR task.

**Questions:**
- Do the conditions in the theoretical analysis, under which grokking is mitigated for the XOR task, have any implications for the causes of grokking? For instance, they possibly suggest several causes of grokking, including the SNR, overparametrization (A5), initialization (A3), etc.
- Are there failure cases of the method? I do not see mentions of them in the limitations.

**Ethics Review:** No ethics review needed.
**Rating:** 3
**Confidence:** 4
**Code of Conduct:** Yes

---

Thank you for the detailed response to my review.

The authors argue that $A$ and $B$ can be merged into the full matrix $E_T$ at the end of training, with minimal effects on the performance. The mentioned Figure 15(b) does not seem to mention what task this is evaluated on, but I assume it is similar to the ones in the original paper, where parameterizing $E_T$ as $AB$ is already sufficient to fully learn the (toy) task. This makes the experiment rather self-fulfilling, since a fully parameterized $E_T$ matrix is not even necessary for the task in the first place. Instead, it would be more convincing if the authors were to demonstrate experiments on tasks where the greater expressiveness provided by $E_T$ is necessary for achieving good performance.

Also, the authors argue that "low-rankness is not a necessary part of our story" and that "the motivation is not to apply dimensionality reduction, since low-dimensionality is not a crucial aspect of our method". Figure 15(c) demonstrates the converse to me. Without the low-rank conditions, the main contribution and strength of the method, which is providing large speed-ups in combined training time, disappears. My concern was not that $A$ and $B$ have to be low-rank for the model to converge, but rather that $A$ and $B$ have to be low-rank for the method to be effective in terms of accelerating convergence.

---

We thank the reviewer for their review.
We would greatly value the opportunity to discuss and remain available to address any additional questions or concerns during the discussion period. In light of the clarifications, revisions, and additional evaluations we have provided, we kindly encourage the reviewer to revisit their assessment and consider adjusting their final rating before the discussion period concludes.\"}", "{\"title\": \"Global Response\", \"comment\": [\"We appreciate the reviewers' feedback! In this response, we summarize our additional experiments, which have been incorporated into the updated PDF.\", \"**Section A.4.1**:\", \"We include an estimation of the FLOPs required for the forward pass in the weak and target models within the FNN-to-Transformer scenario (Figure 8, Left). This demonstrates that the FLOPs of the weak model can be significantly smaller than those of the target model.\", \"**Section A.4.2**:\", \"We present additional experiments on XOR cluster data, the modular multiplication task, and image classification on MNIST, where grokking was observed with large initialization scales [1].\", \"For the **XOR cluster data** case study, we analyzed the minimum number of neurons required in the weak model to ensure near-perfect generalization for the target model (Figures 10, 11).\", \"For the **modular addition task**, we visualized the weak model\\u2019s embedding and observed a periodic structure resembling Fourier embeddings (Figure 12).\", \"For the **FNN-to-Transformer scenario**, we added experiments on modular multiplication and compared the results with GrokFast (Figures 13 Left, 16).\", \"**Figure 14**: We applied GrokTransfer to the MNIST dataset and observed an improvement in generalization speed.\", \"**Figure 15**: We compared the dynamics of three configurations: randomly initializing both models $A$ and $B$, merging $A$ and $B$ into $A\\\\times B$, and setting the embedding dimension of the weak model equal to that of the target model. 
These comparisons highlight that the success of our method is not from the low-rankness of the embedding layer but from the information encoded in the weak model's embedding.\", \"[1] Omnigrok: Grokking Beyond Algorithmic Data\"]}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"We thank the reviewer for the detailed review. It is encouraging that the reviewer thinks our work is interesting and significant. We hope that our response satisfactorily addresses the reviewer\\u2019s questions/concerns.\\n\\n- ***The theory on XOR would benefit from a better understanding of the \\\"weaker model\\\", ...... e.g. training a single neuron on this data might be tractable and similar to previous work?)***\\n\\nWe appreciate the question regarding the theoretical analysis of the weak model. In the single neuron case, the trajectory analysis is similar to that in [1]. Specifically, the neuron will gradually align with one of the four feature directions (the direction with the most samples in the training data) and will only be able to learn that feature due to model expressivity. Consequently, the test accuracy will always be around 25% and the training accuracy never exceeds 50%, which is verified empirically in Figure 10 Left and Middle (page 26) in the updated manuscript. The two or three-neuron cases are significantly more challenging to analyze and are beyond the current toolkit in deep learning theory, mainly due to the complex interactions between different neurons. Another type of setting we know how to analyze is when the number of neurons is sufficiently large (e.g., at least logarithmic on the sample size); we did not focus on such settings because we wanted to highlight that the weaker model doesn\\u2019t have to generalize perfectly.\\n\\n\\n- ***The presentation could be improved in various ways, in particular the notion of \\\"small model\\\" considered isn't very precisely ...... 
Perhaps this changes when looking at \\\"compute/flops\\\", and it would be good to include such plots.***\\n\\nThank you for your suggestion. Here \\u201csmaller model\\u201d generally refers to a model with relatively smaller expressivity than the target model. This is similar to what has been studied in the contexts of distillation and weak-to-strong generalization. We have updated the pdf to include this clarification. \\n\\nThe number of epochs needed by the weak model to generalize does increase compared to that of the target model. However, the wall-clock time has a significant decrease compared to training the target model from scratch. In Table 1 (page 10), we see that the wall-clock time used to train the weak model plus that to train the target model with GrokTransfer is much smaller than that of training the target model from scratch, thus boosting the generalization speed in terms of time. We didn\\u2019t include compute/flops because it is very hard to measure the flops for transformers in the backward pass when using CUDA. For the forward pass, we follow the formula in Table 1 in [4] and calculate the flops for Figure 8 Left. The flops of GrokTransfer is around 1e10, and the flops of training from scratch is around 1e11. The flops of training the weak model is around 1e7, which is negligible compared to the flops of training the target model. Please see more details in Section A.4 in the updated pdf.\"}", "{\"summary\": \"This paper investigates \\\"grokking,\\\" a phenomenon where neural networks initially struggle to generalize, only to suddenly achieve near-perfect generalization after extended training. To accelerate generalization, the authors propose GrokTransfer. GrokTransfer leverages the embedding from a smaller, preliminary model that achieves moderate test performance, transferring it to a stronger target model to speed up generalization. 
This approach successfully eliminates delayed generalization on a synthetic XOR task and performs well across various algorithmic tasks on both fully connected networks and Transformers. The results suggest that GrokTransfer could reshape training dynamics, enabling models to generalize without delay.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is interesting and novel to my knowledge. It first trains an under-parameterized model that is incapable of perfectly interpolating the data, and then uses the learned data embedding to facilitate the grokking of an over-parameterized model. It\\u2019s like approaching a problem by first crafting a simple but general solution that works well for most cases and then refining the solution to cover all the rest of the cases. Intuitively, this eliminates a lot of unnecessary competition among various not-so-simple solutions in an over-parameterized model, hence accelerating the learning process.\", \"The paper is very well written. It includes an extensive discussion of the related work, a clear presentation of the motivation behind the proposed method, and a detailed case study that explains how and why the method works.\"], \"weaknesses\": [\"It is unclear how well the method would generalize and scale to more complicated problems where such acceleration can make a real impact. The algorithmic problems are useful for analysis but are too simple in comparison with real-world problems which often involve high-dimensional inputs. For high-dimensional inputs with many redundant features, it is likely for the weaker model to lock onto degenerate solutions that would hinder the stronger model from grokking. It would be interesting to see if this is really the case or not.\", \"The paper does not provide much insight on how to design or choose the weaker model. Clearly, the weaker model can neither be too weak nor too strong, i.e. there is a trade-off. 
If it is too weak, the solution may degenerate and thus make it harder for the stronger model to grok. If the weaker model is too strong, then it may take too much time to train the weaker model. Therefore, for this method to be truly useful, there should be some general rule of thumb for choosing the weaker model, otherwise, much time could be wasted on trial and error.\", \"As the authors have mentioned in the paper, the theoretical result only considers a relatively simple XOR task. There does not seem to be any clear indication if the analysis could potentially be applied to more general problems. Therefore, the significance of this result is in doubt.\"], \"questions\": [\"Would a \\u201csmoother\\u201d version of the proposed method work even better? For example, one could start from a single embedding layer with a small number of neurons and gradually deepen/widen it, i.e., adding more layers and adding more neurons to each layer.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"We thank the reviewer for their review and address their comments below. 
We hope that the response satisfactorily addresses the reviewer\\u2019s questions and that the reviewer will consider raising their score in light of our response.\\n\\n- ***My main concern is that all the numbers reported for GrokTransfer and the target model (Large) are not directly/fairly comparable.......***\\n- ***This defeats the purpose of studying/mitigating grokking in the first place, ......***\\n- ***Method also greatly reduces the expressivity of the original target model, by adding a very low-rank bottleneck to the first layer.......***\\n\\n\\nWe would like to clarify that the low-rank bottleneck is not a crucial aspect of GrokTransfer.\\n\\nFirstly, we found that $A$ and $B$ can be merged into the full matrix $E_T$ later on during training, and $E_T$ can be further trained to convergence. This has little influence on the performance, and doing this will not impose any low-rank constraint. We have included this experiment in **Figure 15(b)** (page 28), where we first train the target model with low-rank embedding for 100 epochs, merge $A, B$ into $E_T$ and keep training.\\n\\nSecondly, if we directly train the low-rank factorization $AB$ from random initialization (as suggested by the reviewer), it will not work. This implies that the success of GrokTransfer is not due to the low-rank constraint, but rather the specific embedding being transferred. We have run the experiments for the target model with low-rank embedding and both A,B are randomly initialized, and show the results in **Figure 15(a)** (page 28) in the updated pdf. The config was selected by grid search the same as those in Figure 6 and 8. We randomly initialize A instead of transferring from the weaker model. 
We run the experiments for both MLP and transformers and find that the target model fails to perfectly generalize in 4000 epochs, while both the target model trained by GrokTransfer and trained from scratch can perfectly generalize in 4000 epochs.\\n\\nThirdly, we wish to mention that $A, B$ are not necessarily low-rank matrices. In **Figure 15(c)**, we show that with full-rank $A, B$ (i.e., when $d_W=d_T$), GrokTransfer still achieves the same acceleration effect. We used low-rank matrices in most of our experiments only because we wanted to show that even when the weak model has a low-dimensional embedding and cannot generalize perfectly, its embedding can still enable the target model to perfectly generalize with much faster speed.\\n\\nOur intuition is that initializing the embedding layer with an informative embedding would help build a good landscape when minimizing the loss function. **We wish to emphasize that low-rankness is not a necessary part of our story.**\\n\\n- ***The general motivation of the method appears to be applying dimensionality reduction to pre-process inputs that have very low SNR......***\\n\\nWe\\u2019d like to clarify that the motivation is not to apply dimensionality reduction, since low-dimensionality is not a crucial aspect of our method. Instead, the motivation is to use the embedding learned in the weaker model to accelerate learning in the target model. As for why such embedding is helpful, we believe that it can differ between different problems. \\n\\nIn the XOR problem, it is indeed true that the learned embedding is effectively performing a dimensionality reduction to improve the SNR. However, this mechanism doesn\\u2019t necessarily apply to other problems. For example, modular addition has noise-free data, and a learned embedding is still very useful for eliminating grokking. 
The general motivation of our method and a unifying view across different settings is that an informative embedding is crucial.\\n\\n- ***Method introduces an additional layer of complexity in the training pipeline, due to having to train an additional model which would require its own architectural and hyperparameter sweeps.***\\n\\nWe agree that training and tuning an additional weak model makes the pipeline more complex. That said, we found that our method is quite robust to architectures and hyperparameters, and in our experiments it\\u2019s usually easy to find a weak model that works. The total computation cost is still much lower than training the target model from scratch.\\n\\n- ***Minor comments on notation...***\\n\\nThank you for pointing out these issues. In Section 2.2, we use $d_W$ and $d_T$ to represent the embedding dimension of the Weak and Target model to distinguish the difference. We have clarified this in the updated pdf.\\n\\nFor $y=x1x2$, we did mean to say $y=x1 * x2$, like $(p,2)$-parity task. We call it XOR distribution because opposite clusters share the same label and the projection of the data distribution looks like a XOR (see Figure 5a or Figure 11 middle for example).\\n\\nYou are right that the notation p is overloaded. We will fix this in the next version.\"}", "{\"title\": \"Response to Questions\", \"comment\": \"- ***Do the conditions in the theoretical analysis, under which grokking is mitigated for the XOR task, have any implications on what are the causes of grokking? For instance, it possibly suggests several causes of grokking, including the SNR, overparametrization (A5), initialization (A3), etc.***\\n\\nFor the XOR task, SNR is the important cause of grokking. However, we don\\u2019t think this applies to other problems. Our work suggests a more general view that a good input embedding is an important factor that controls grokking across different problems. 
For the XOR problem, a good embedding turns out to be one that increases SNR.\\n\\nMost conditions in the theoretical analysis are the same as previous works [1,2,3] studying benign overfitting. Some are used to guarantee near-perfect generalization, e.g. SNR; some are technical conditions that come from concentration inequalities, e.g. (A5), and trajectory analysis (A3). Empirically, we find that grokking is neither caused by model structure nor hyperparameters such as learning rate, weight decay. Grokking is more related to the data structure (embedding), and the target function to be learned.\\n\\n- ***Are there failure cases of the method? I do not see mentions of them in the limitations.***\\n\\nThank you for your question. A limitation of our method is that if a given problem lacks a significantly smaller model capable of achieving nontrivial generalization, GrokTransfer would not reduce computational costs and may provide limited utility. We will incorporate this discussion into the manuscript.\\n\\n\\n\\n[1] Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data.\\n\\n[2] Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization\\n\\n[3] Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data\"}", "{\"comment\": \"I thank the authors for carefully addressing my concerns. The MNIST experiment shows some promising signs of the scalability of the method, which is good, but I was thinking about something more realistic and challenging (e.g., ImageNet; natural language modeling). I understand that such experiments may be too much to ask during rebuttal, but still, it would be a big plus if the authors could show that their method works at a scale and complexity closer to real-world applications. Maybe this can be done in the future. 
For a preliminary study, the current result seems to suffice.\\n\\nFor the second point, it makes perfect sense to start from a computationally light model and gradually grow the model to a larger size until it shows nontrivial generalization. However, a key issue lies in the notion of \\\"nontrivial generalization\\\". For modular arithmetic tasks, 40% validation accuracy is sufficient, but this is only known after training the target model. For other tasks, does 40% accuracy also suffice? Could some require more, let's say, 80% accuracy? I think the paper would benefit from more empirical analysis of the relation between the accuracy of the weak model and the time required to train the target model.\"}" ] }
4qygYXJc0V
Accurate Forgetting for All-in-One Image Restoration Model
[ "Xin Su", "Zhuoran Zheng" ]
Privacy protection has always been an ongoing topic, especially for AI. Currently, a low-cost scheme called Machine Unlearning forgets the private data remembered in the model. Specifically, given a private dataset and a trained neural network, we need to use techniques such as pruning, fine-tuning, and gradient ascent to remove the influence of the private dataset on the neural network. Inspired by this, we try to use this concept to bridge the gap between the fields of image restoration and security, creating a new research idea. We propose this scenario for the All-In-One model (a neural network that restores a wide range of degraded information), where a given dataset, such as haze or rain, is private and its influence on the trained model needs to be eliminated. Notably, we find great challenges in this task: removing the influence of sensitive data while ensuring that the overall model performance remains robust, which is akin to directing a symphony orchestra without specific instruments while keeping the playing soothing. Here we explore a simple but effective approach: Instance-wise Unlearning through the use of adversarial examples and gradient ascent techniques. Our approach is a low-cost solution compared to the strategy of retraining the model from scratch, where the gradient ascent trick forgets the specified data and the adversarial samples keep the model's performance robust. Through extensive experimentation on two popular unified image restoration models, we show that our approach effectively preserves knowledge of remaining data while unlearning a given degradation type.
[ "Image Restoration; Privacy Protection" ]
https://openreview.net/pdf?id=4qygYXJc0V
https://openreview.net/forum?id=4qygYXJc0V
ICLR.cc/2025/Conference
2025
{ "note_id": [ "HkCwy5jcmh" ], "note_type": [ "comment" ], "note_created": [ 1728629364570 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission118/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4qh6nurdYt
Effective Learning with Node Perturbation in Multi-Layer Neural Networks
[ "Sander Dalm", "Marcel van Gerven", "Nasir Ahmad" ]
Backpropagation (BP) remains the dominant and most successful method for training parameters of deep neural network models. However, BP relies on two computationally distinct phases, does not provide a satisfactory explanation of biological learning, and can be challenging to apply for training of networks with discontinuities or noisy node dynamics. By comparison, node perturbation (NP) proposes learning by the injection of noise into network activations, and subsequent measurement of the induced loss change. NP relies on two forward (inference) passes, does not make use of network derivatives, and has been proposed as a model for learning in biological systems. However, standard NP is highly data inefficient and unstable due to its unguided noise-based search process. In this work, we investigate different formulations of NP and relate it to the concept of directional derivatives as well as combining it with a decorrelating mechanism for layer-wise inputs. We find that a closer alignment with directional derivatives together with input decorrelation at every layer strongly enhances performance of NP learning with large improvements in parameter convergence and much higher performance on the test data, approaching that of BP. Furthermore, our novel formulation allows for application to noisy systems in which the noise process itself is inaccessible.
[ "efficient machine learning", "optimization" ]
Reject
https://openreview.net/pdf?id=4qh6nurdYt
https://openreview.net/forum?id=4qh6nurdYt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zoIfTytqz4", "vAmv2fu7Ee", "jqvJGeQXIT", "cJgjZMKZD0", "YzWK3X0jyV", "PEJkPPZTRN", "P1mziOlYys", "MwhofPoSxX", "MnwEUSYjnT", "BLsuKC0VHD", "6rKzDG0NxC", "6mgaO8hd4h", "1N1sqwqTi6" ], "note_type": [ "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1735000286394, 1732622102236, 1737524035535, 1732621801260, 1732886306596, 1730149437061, 1732654366307, 1730692180778, 1732622756207, 1732622354548, 1732740313103, 1732679920748, 1730649500240 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10238/Area_Chair_w5k3" ], [ "ICLR.cc/2025/Conference/Submission10238/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10238/Authors" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_smGL" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_wuaz" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_wuaz" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_nexo" ], [ "ICLR.cc/2025/Conference/Submission10238/Authors" ], [ "ICLR.cc/2025/Conference/Submission10238/Authors" ], [ "ICLR.cc/2025/Conference/Submission10238/Authors" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_nexo" ], [ "ICLR.cc/2025/Conference/Submission10238/Reviewer_smGL" ] ], "structured_content_str": [ "{\"metareview\": \"This submission addresses the important topic of exploring alternatives to backpropagation (BP) by investigating node perturbation (NP) frameworks with biologically plausible learning algorithms. 
While the paper has notable strengths, including a clear exposition of the methods and improvements to NP through iterative (INP) and activity-based (ANP) formulations, reviewers identified some weaknesses, see below.\", \"additional_comments_on_reviewer_discussion\": \"Key weaknesses include unresolved concerns about the bias in the ANP update rule, which is claimed to provide a \\\"principled approach\\\" despite being theoretically less aligned with BP than standard NP under certain conditions. Furthermore, the empirical results, while promising, lack robust validation against alternative learning rates and stronger baselines, raising questions about the generalizability of the findings. The claims around biological plausibility remain speculative without concrete connections to biological mechanisms.\"}", "{\"title\": \"Response to reviewer's comments\", \"comment\": \"We thank the reviewer for their careful and constructive feedback.\\n\\n__Regarding the scalability limitations__\\nThough ANP might prove useful as an explanation for certain aspects of biological learning or as a template for learning in other noisy hardware, the experiment on Tiny ImageNet using multiple noise iterations was intended as a proof of concept, showing that, in principle, (D)INP converges to BP\\u2019s performance if given enough computation. Our claim is not that parallelization is plausible in biological networks.\\n\\n__Regarding gradient approximation__\\nWe have added an experiment to explore this topic. For NP, ANP and INP, we replaced Figure 2 with an experiment showing the alignment of the update with BP as a function of the number of noise samples. In Appendix G we also show these data as a function of the number of noisy forward passes.\\n\\n__Regarding decorrelation analysis__\\nThough decorrelation can be seen as an extra preprocessing step in learning, note that the decorrelation is provided by a simple matrix multiplication, which is a linear operation. 
The decorrelated forward pass can therefore be expressed as y = Ax, where A is RW, with R the decorrelation matrix and W the forward weights. Therefore, the decorrelated matrix and forward matrices can be combined in inference and models therefore have the exact same parameterization as a regular forward model. The benefit of decorrelation does not come from more steps or parameters being added, but from a more efficient learning process from the decorrelated features space.\\n\\nBP\\u2019s modest test accuracy benefit from decorrelation in Figure 4 (top) might well be explained by BP already performing well without decorrelation, causing test accuracy to run into the limit of the network architecture. Note that Figure 4 (top) does show a large test accuracy benefit for the NP-style algorithms, suggesting that decorrelation does not simply lead to overfitting.\\n\\n__Regarding learning in deep layers__\\nWe would like to emphasize that learning in the 3 hidden layer networks was significantly better than learning in the single layer networks and performance for DINP was best in the 6 hidden layer network. Significant learning was also seen on the Tiny ImageNet task, using 5 hidden layers. These results make it implausible that only the output layer was learning significantly.\\n\\n__Regarding variance and convergence of the loss gradient__\\nThis issue is now explored more in the additional experiment, showing alignment with BP as a function of the number of noise samples in Figure 2. \\n\\n__Regarding impact of the decorrelation step__\\nThough in the smallest networks performance of DNP, DANP, and DINP does not differ much, in the case using multiple noisy passes (Figure 5) and the Tiny ImageNet experiments (Figure 6) significant performance differences emerge. 
Moreover, these differences align with the theoretical properties of ANP and INP as well as with the alignment analysis in Figure 2, showing that ANP and INP provide benefit in addition to decorrelation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to reviewer's comments\", \"comment\": \"We thank the reviewer for their careful and constructive feedback.\\n\\n__Regarding bias in ANP and NP__\\n\\nIndeed Hiratani et. al. (2022) show NP to be unbiased, but only under certain assumptions which we find unrealistic, namely the assumption of a linear network, or infinitely small perturbations which would essentially linearize the system.\\n\\nUnder more realistic assumptions of non-linearity and small but measurable perturbations, it can be shown that NP and ANP are both biased, but with ANP having a more local and interpretable source of bias. We demonstrate this based upon a Taylor Series expansion of a similar derivation as you have used and added an Appendix D to explain this. Ultimately, a Taylor expansion shows that in the case of a finite sized perturbation (which is always the case when implementing NP practically) updates under NP contain error terms related to downstream impacts of noise which ANP does not. Furthermore, in the case of decorrelated perturbation state differences, ANP is less biased. \\n\\n__Regarding the convolutional implementation__\\n\\nThe convolutional implementation uses a built-in convolutional layer (e.g. from TensorFlow or PyTorch) and then adds noise to all of its outputs independently. To compute the weight update, image patches are extracted from the input using the same convolutional kernel dimensions as used in the layer object, after which the updates are first computed per patch and then averaged over patches.\"}", "{\"title\": \"Response to the authors' rebuttal\", \"comment\": \"1. 
Biological plausibility: your answer is not satisfying, as it still means this method will not scale well in biological settings.\\n2. Thank you for the additional analysis; it has improved my understanding. However, this does not exactly answer my question since performing many \\u201cepochs\\u201d is not the same as averaging over several noisy passes for each update. This is because the noise is within the nonlinearity. \\n3. Decorrelation: I accept your arguments.\\n4. Learning in the deep layers. I still think that the last layers do most of the learning. However, I agree that there is a noticeable difference in the shallower layers.\\n\\nI still think this work is interesting and above the threshold for acceptance. However, I am not yet convinced it is a strong accept.\"}", "{\"summary\": [\"In this paper, the authors extend an existing framework to train multi-layer neural networks without using backpropagation (BP). Node perturbation (NP) relies on injecting noise in the network layers, measuring the loss differential between the clean and noisy pass, and computing a layer-wise update which relies on the noise vector, the pre-synaptic activity and the loss differential. The authors improve on the NP framework with three main contributions:\", \"In the traditional NP approach, the effect of the layer $\\\\ell$ noise $\\\\epsilon_\\\\ell$ on the downstream layers is unaccounted for. By computing the directional derivative of the loss on the noise injected at layer $\\\\ell$ ($\\\\nabla_v \\\\mathcal{L}$), they can more precisely target the updates to layer $\\\\ell$. This method, referred to as iterative node perturbation, requires $L+1$ forward passes, and relies on access to the noise vector.\", \"Next, the authors propose activity-based noise perturbation (ANP). This approach relies on the assumption that all the layers are independent, which requires measuring the state difference between clean and noisy passes instead of the noise injected. 
This requires only two passes and does not require access to the noise signal, but rather its effect on the network.\", \"Lastly, using an existing trainable decorrelation procedure, they show improved performance of their proposed algorithms by decorrelating the inputs to each layer.\", \"These variations of NP are tested on fully connected and convolutional networks on CIFAR-10 and Tiny ImageNet and compared to BP.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I believe the paper is well structured and well written, the results are interesting and overall well defended. In detail:\", \"The methods section is clear and straightforward. I appreciated the incremental structure that starts from Node Perturbation and adds on the newly proposed variations, explaining well the contribution of each piece.\", \"The results prove the claims of the authors regarding how each variation of NP compares, and I appreciate using Tiny ImageNet as a benchmark, which is more challenging than what is usually found in these types of papers.\", \"I found section 3.4 very interesting, as most methods I know of rely on having non-noisy systems. I think removing the assumption of having a clean and noisy pass, and assuming all passes are noisy makes it a very interesting algorithm.\"], \"weaknesses\": [\"I am giving this paper a 6 because of the following weaknesses I found, but I would be happy to re-evaluate if these concerns are addressed. My main concerns are:\", \"The word bioplausible is mentioned throughout the paper, but details on how these algorithms could be implemented in biological neural networks are not provided. 
I believe the paper still stands without needing to justify it as bioplausible, so I believe that either removing the bioplausibility aspect (and exchanging it for more details on possible hardware implementations) or providing more details about the bioplausibility would be better alternatives.\", \"In recent years, many alternatives to backpropagation that rely on multiple forward passes have been proposed. This is particularly relevant to equation (6) in the paper. For example, Dellaferrera and Kreiman, 2022, propose to use the differential between a clean and \\\"noisy pass\\\" to train the network, where the noisy pass relies on a perturbation of the activity computed using the loss differential. Since there are many other algorithms like this (for example Hinton 2022, Kohan et al. 2018, Frenkel et al 2021), I believe a comparison in the methods section would be beneficial.\", \"The placement of the figures could be improved to make the paper more readable. For example, Figure 1 could be pushed up in the paper as it summarizes the contributions of the paper, and Figure 4 could be moved to section 3.3.\", \"I think Figure 2 could be improved. For example, Figure 2 (right) does not add anything that cannot be explained in words.\"], \"questions\": [\"These questions/suggestions mostly add on top of what I mentioned in the weaknesses:\", \"What happens to the update angle during training? Do you observe a better alignment close to convergence? If this is interesting, it could be added to Figure 2.\", \"I am pretty surprised about the results with a fully connected network trained with BP, as it is pretty common to see these networks obtaining >60% accuracy on CIFAR-10. 
Could you elaborate more on the choice of hyperparameters?\", \"I believe it is always important to include a code repository in these papers, as it helps other researchers in the field and makes the experiments easier to reproduce.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I want to thank the authors for addressing my concerns. I read the revised version of the paper and I believe that most of my concerns have been addressed. Regarding the BP benchmark, I have read a number of papers which achieve ~60% accuracy on CIFAR-10 with 1-hidden-layer networks. Although I do not think this is extremely important for the paper itself, I just want to make sure the comparison is done rigorously.\\n\\nAlthough I will wait to read the discussion with the other reviewers to update my score, as of now, I would be happy to recommend this paper for acceptance.\"}", "{\"summary\": \"In this manuscript, the authors propose an improved algorithm for node perturbation (NP) with two key modifications compared to the standard NP. First, the weight update is calculated using the total change in activity at each hidden node instead of the direct perturbation at that node. Secondly, the algorithm incorporates a decorrelation step at each layer to minimize noise correlations. The authors demonstrate numerically that decorrelation robustly enhances the performance of NP in deep neural networks trained on the CIFAR-10 dataset. They also show that using the total change in activity for updates outperforms the vanilla NP in convolutional neural networks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The manuscript is clearly written and well-motivated. 
Moreover, the numerical experiments convincingly demonstrate that decorrelation improves the performance of NP.\", \"weaknesses\": \"Section 2.1.2, especially L127-L128, provides an impression that INP is required to make the update unbiased. However, the vanilla node perturbation is known to converge to the true backpropagation at the infinite sample limit if the noise added to each layer is independent of each other (Fiete et al., 2007; Hiratani et al., 2022).\\n\\nOn the contrary, the ANP update rule is biased against back-propagation. This can be demonstrated in a one-hidden layer non-linear network $y = W_2 f (W_1 x)$ with a loss function L(y). \\nGiven perturbations $h v_1$, $h v_2$, at $h \\\\to 0$ limit, the ANP update for the second layer is\\n$$\\\\begin{eqnarray}\\n\\\\Delta W_2 \\n&=& \\\\frac{1}{h} \\\\langle (L(\\\\tilde{y}) - L(y)) (v_2 + W_2 [f'(W_1 x) \\\\odot v_1] ) f(W_1 x)^T \\\\rangle \\n\\\\nonumber \\\\\\\\\\\\\\\\\\n&=& \\\\langle (v_2 + W_2 [f' (W_1 x) \\\\odot v_1] ) (v_2 + W_2 [f' (W_1 x) \\\\odot v_1] )^T \\\\rangle \\\\frac{\\\\partial L(y)}{\\\\partial y} f (W_1 x)^T \\n\\\\nonumber \\\\\\\\\\\\\\\\\\n&=& (I + W_2 \\\\text{diag} [ f'(W_1 x)^2 ] W_2^T) \\\\frac{\\\\partial L(y)}{\\\\partial y} f(W_1 x)^T\\n\\\\end{eqnarray}$$\\nBecause the true gradient is \\n$\\\\frac{\\\\partial L(y)}{\\\\partial W_2} = \\\\frac{\\\\partial L(y)}{\\\\partial y} f(W_1 x)^T$,\\nthe ANP update rule above is biased against the true gradient, implying that the claims made in section 2.1 are inaccurate. \\n\\nGiven this bias, it is unclear why ANP achieves better alignment with backpropagation in Figure 2. Nevertheless, biased updates can sometimes facilitate faster learning, as discussed in Song, Millidge, et al., Nature Neuroscience, 2024. Therefore, the results shown in the bottom panel of Figure 4 are potentially interesting.\\n\\nRegarding comparison with BP, empirical results are presented in a somewhat misleading way. 
A key limitation of NP is its need for a low learning rate, which means performance comparisons with BP should be conducted across a wide range of learning rates.\", \"questions\": \"Could you elaborate on the implementation details in convolutional networks, particularly regarding how weight sharing was handled?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updates to paper based on reviewers' comments\", \"comment\": [\"We again thank our reviewers for their constructive feedback. Several changes have been made to the paper:\", \"The introduction now mentions some of the suggested literature regarding other bio-plausible algorithms.\", \"Figure 2 has been replaced by a figure that measures the alignment of NP, ANP and INP updates w.r.t. BP as a function of the number of noise iterations, to give more insight into both the scaling properties of the algorithms and the degree to which theoretical properties of the algorithms play out in practice.\", \"An Appendix D has been added, providing an in-depth mathematical analysis showing ANP to be less biased than NP.\", \"An Appendix G has been added, showing the same data as Figure 2, but as a function of the number of noisy forward passes for each algorithm, essentially comparing their alignment per unit of computation.\"]}", "{\"title\": \"Response to the reviewer's comments\", \"comment\": \"We thank the reviewer for their careful and constructive feedback.\\n\\n__Regarding the details of bioplausibility__\\nOur aim in this work was not to implement a complete biologically \\u2018detailed\\u2019 algorithm per se, but to study a gradient-free learning method with several possible applications and implementations. 
DANP, in particular, is able to learn in systems which do not have a noise-free baseline, based upon a global feedback signal, which is more akin to how biological brains learn and is also applicable to certain types of hardware.\\n\\n__Regarding other relevant work__\\nThough our paper specifically investigates noise-based algorithms, a brief mention of other bio-plausible algorithms has now been added to the introduction, to contrast them with these noise-based algorithms.\\n\\n__Regarding figure layout and the usefulness of Figure 2__\\nFigure 2 has been replaced by a more in-depth figure, exploring the alignment of NP, ANP and INP with BP as a function of the number of noisy forward passes. For the camera-ready version we will have a deeper look at layout issues.\\n\\n__Regarding update angle during training__\\nWe have added a new experiment studying the update angle as a function of the number of noise iterations, again in Figure 2, to shed some more light on the update angles of the various algorithms.\\n\\n__Regarding performance levels on CIFAR__\\nPerformance in all experiments lags slightly behind established benchmarks, which is likely due to the simple networks used. The aim of this work was to compare the convergence properties of algorithms on a decently challenging task, not to achieve any set level of benchmark performance. We selected learning rates for each experiment separately based on a grid search, see Appendix F. Other hyper-parameters, like noise magnitude, did not seem to affect performance much as long as they stayed within a reasonable range. This is described in Appendix H.\\n\\n__Regarding code repository__\\nAn implementation of the algorithms will be made available upon acceptance.\"}
Regarding our acknowledgment of the inherent bias in our ANP formulation relative to BP, we understand your concern. To mitigate this, we have modified our paper by adjusting the sentences which you indicated. Note that our claim of a \\u2018more principled approach\\u2019 on L133 and \\u2018solid foundation\\u2019 were both comments made with respect to (and in sections regarding) INP, which we resolutely stand by. INP does not require any consideration for the impact of propagated noise. However, we have modified our claims in the methods section on ANP to directly point out that:\\n`However, relative to BP our derived ANP rule has a biased estimation in gradient measurement. Appendix D describes this bias and shows that, when considering finite noise injection, this bias is measurable in closed form and is interpretable.`\\nand\\n ` \\u2026 upon averaging across many samples can approximate the true gradient with a distinct bias based upon correlations in propagated noise.`\\n\\nBeyond this, our explicit naming of the Hessian term was perhaps unfair. We have updated the text to refer to `...beginning with the Hessian but also including third, fourth and higher derivative terms.` In general, one cannot say that the bias induced by all of these terms is zero - especially given that even a simple non-linearity such as ReLU applied to random noise (with zero mean) will produce an output with non-zero mean. \\n\\nFurthermore, we acknowledge that $$\\\\left\\\\langle (\\\\tilde{y}_2 - y_2)(\\\\tilde{y}_2 - y_2)^T \\\\right\\\\rangle$$ is not diagonal in general. Our goal was to point out that its bias is well described, as opposed to an infinite series of Taylor expansions, and that by inclusion of a decorrelation matrix (on top of a weight matrix W) one could try to optimize for this term to become diagonal. 
Text in the last sentence of Appendix D has been updated to better reflect our intended message such that we now only claim `This is a much more interpretable bias term which could even be targeted directly to be made diagonal if a practitioner was interested in doing so.`.\\n\\n__Regarding learning rates for NP and BP__\\nIn this work, we actually optimized learning rates for all algorithms separately to ensure a fair comparison. For the single layer case, it just happened that the optimal learning rate was identical for BP and NP.\"}", "{\"comment\": \"I appreciate the authors' efforts in revising the manuscript. However, the revised version still fails to address my primary concerns regarding its technical soundness.\\n\\nMy main issue is that the proposed learning rule remains biased with respect to BP, even in the infinitesimal noise limit. This contrasts with the standard node perturbation (NP) rule, which is unbiased with respect to BP. While bias in the learning rule may not be inherently problematic, it is concerning that this significant limitation is not acknowledged in the manuscript. Instead, the authors claim their proposed rule is a \\\"more principled approach to node perturbation\\\" (Line 133) and has \\\"a more solid theoretical foundation\\\" (Line 67) without clear justification.\\n\\nFurthermore, the arguments presented in the newly added Appendix D are mathematically inaccurate. The authors assert that \\u201cthere are terms involving the correlation between noise vector $v_2$ and higher order terms involving $v_1^\\u22a4 v_1$ and the network activity Hessian. We cannot, in general, say anything about the unbiasedness of these terms and these are sources of unexplained error in existing work.\\u201d However, the term they reference, $\\\\left\\\\langle v_2 (v_1^T \\\\nabla^2_{y_1} y_2 v_1)^T \\\\right\\\\rangle$, is zero if $v_1$ and $v_2$ are independently sampled zero-mean random variables. Therefore, it doesn\\u2019t induce bias. 
\\n\\nTheir subsequent claim that \\u201cThis can be brought to identity (or at least diagonal) by decorrelated activities and decorrelated activity differences,\\u201d is also inaccurate. As previously mentioned, assuming that perturbation vectors $v_1, v_2$ are white noise with amplitude $\\\\sigma_v^2$, we have $\\\\left \\\\langle (\\\\tilde{y}_2 - y_2) (\\\\tilde{y}_2 - y_2)^T \\\\right \\\\rangle = \\\\sigma_v^2 \\\\left( I + W_2 \\\\text{diag} [f' (W_1 x)^2] W_2^T \\\\right) + \\\\mathcal{O} (\\\\sigma_v^4)$. Thus, $\\\\left \\\\langle (\\\\tilde{y}_2 - y_2) (\\\\tilde{y}_2 - y_2)^T \\\\right \\\\rangle$ is not diagonal in general. \\n \\nA secondary concern relates to the comparison with BP. As noted in my previous comments, a key limitation of NP is that it requires a small learning rate for convergence due to high variance (Werfel et al., NeurIPS 2003). Comparing NP with BP at a fixed small learning rate may give the misleading impression that the proposed methods (INP/ANP) are competitive with BP. In practice, BP is expected to achieve much faster convergence by using a larger learning rate. Therefore, the statement in the abstract claiming \\\"large improvements in parameter convergence and much higher performance on the test data, approaching that of BP\\\" is misleading.\\n\\nWhile the work offers some interesting contributions\\u2014such as increased biological plausibility by eliminating the need for neurons to track the source of noise and the idea of input decorrelation to improve NP\\u2014I believe the current manuscript lacks sufficient scientific rigor.\"}", "{\"summary\": \"Backpropagation (BP) is the standard for training deep neural networks but is criticized for its lack of biological plausibility and computational complexity due to separate forward and backward phases. 
Node Perturbation (NP) offers an alternative approach by injecting noise into hidden layer activities and using the resulting change in the loss function to guide weight updates. However, traditional NP is inefficient, unstable, and requires precise noise control, limiting its practical utility and biological relevance.\\n\\nThis study extends NP by introducing more robust formulations. It reframes NP using the concept of directional derivatives, leading to an iterative approach (INP) that better aligns with BP in terms of gradient estimation. Additionally, the paper presents an activity-based variant (ANP) that estimates the gradient using differences between clean and noisy activations, thus bypassing the need for precise noise measurement. A key contribution is integrating a layer-wise input decorrelation mechanism, which mitigates the bias in NP updates and accelerates convergence.\\n\\nNumerical experiments demonstrate that these modified NP algorithms, particularly when combined with input decorrelation, significantly enhance performance compared to standard NP and, in some cases, approach BP-level accuracy. The study also shows that these methods can be extended to noisy systems where the noise process is not directly observable, making them applicable to both neuromorphic computing and potential models of biological learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Biological Motivation**: The exploration of node perturbation (NP) as an alternative to backpropagation is compelling due to its alignment with plausible biological mechanisms, negating the need for backward passes and allowing learning from reward signals. Previous work on NP suffers from poor performance and reliance on specific and accurate noise control. 
This work improves on previous studies, offering significant advances that could potentially make NP a competitive framework.\", \"**Innovative Formulations**: The introduction of iterative node perturbation (INP) and activity-based node perturbation (ANP) adds theoretical depth, notably linking perturbation approaches with directional derivatives and improving the stability of NP in noisy environments. In particular, the authors show that the loss gradient can be computed without precise control over the noise process. This solution is elegant and informative.\", \"**Decorrelation Mechanism**: The incorporation of input decorrelation as an unsupervised learning mechanism demonstrates clear improvements in convergence speed, adding practical value to NP and its variants, while maintaining biological plausibility.\"], \"weaknesses\": \"1. **Scalability Limitations**: While the paper suggests that scaling to larger problems could be addressed through parallelization, this solution conflicts with the biological motivation emphasized throughout the text. The authors should reconcile this discrepancy by exploring biologically feasible alternatives or clarifying the practical biological implications. Notably, this approach is clearly relevant for neuromorphic computing, particularly since the noise perturbations in this framework can be arbitrarily small.\\n \\n2. **Gradient Approximation**: Theoretical analyses (e.g., Equation 4) focus on mean gradients. Still, the role of noise variance and the number of noisy samples in the stability and efficiency of gradient estimates is underexplored. Since the authors emphasize the framework's efficiency, claiming high performance can be achieved with a small number of noisy passes, the typical, rather than the mean, loss gradient should be analyzed. \\n\\n3. **Decorrelation Analysis**: Adding an unsupervised learning rule for input decorrelation in each layer is an intriguing and potentially beneficial approach. 
However, the paper\\u2019s analysis of this aspect is insufficient, both numerically and theoretically. The improvement observed from decorrelating inputs, as demonstrated in Figure 3, is unsurprising. In a single-layer architecture, this step functions similarly to an additional linear transformation or data preprocessing step, which is expected to yield performance gains. This effect diminishes the novelty of the finding.\\n Furthermore, while Figure 4 shows notable improvements in BP training accuracy with decorrelation, the minimal test accuracy gains suggest overfitting, indicating that the method primarily accelerates convergence without enhancing generalization. This point needs further exploration to determine the trade-offs between train and test performance. Moreover, Figure 4 highlights that input decorrelation does not enhance\\u2014and may even degrade\\u2014performance in convolutional networks. This discrepancy calls for a more thorough investigation into the conditions under which decorrelation aids or hinders performance. The authors should address these limitations and clarify whether decorrelation consistently benefits deeper and more complex architectures, or if its effectiveness is limited to simpler cases.\\n \\n4. **Learning in the deep hidden layers**: Figure 2 indicates that the gradient alignment in the output weights closely matches that of BP, which may be due to the low-dimensional nature of the output space. This raises a concern that the observed performance, which falls short of BP\\u2019s, could be primarily driven by the readout weights, potentially bolstered by the unsupervised decorrelation step applied to the layer activities. This implies that the NP algorithms may contribute minimally to learning in the deeper layers. \\n To address this issue, the authors should provide evidence that ANP/INP enhances learning throughout the network, rather than merely acting as a support for the final readout layer. 
Specifically, they should demonstrate that these algorithms outperform a simpler baseline approach involving unsupervised learning in the hidden layers followed by SGD or NP for training the output layer.\\n### Minor Comments\\n- **Appendix Clarifications**: The derivations in Appendix C do not add significant new insights beyond the main text and should be expanded to include formal proofs that strengthen the theoretical claims made in the main manuscript.\\n- **Clarification of Sample Averaging**: The use of a noise direction vector $v$ for each sample should be clearly articulated to explain how this averaging over noise directions ensures accurate gradient approximation.\", \"questions\": \"1. **Variance and Convergence of the Loss Gradient**: Could the authors provide an analysis of the variance in the estimated loss gradient as a function of the number of samples? Additionally, what is the typical convergence rate of the loss with an increasing number of samples? This data would offer valuable insights into the scalability and efficiency of the proposed methods.\\n\\n2. **Impact of the Decorrelation Step**: Can the authors confirm that the observed performance improvements are not solely driven by the decorrelation rule? From Figure 4, it is unclear whether INP and ANP contribute significantly without the decorrelation step. What results would be obtained if only the decorrelation step was implemented, followed by NP applied solely to train the readouts?\\n\\nWhile I have additional questions, addressing these principal concerns would be pivotal in reconsidering the rating of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
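The vanilla node perturbation rule debated throughout this thread (an update proportional to the loss differential times the injected noise times the pre-synaptic activity) can be sanity-checked in a minimal setting. The sketch below is illustrative only (a single linear layer with quadratic loss, hypothetical function names), not the reviewed paper's implementation; it shows the averaged NP update aligning with the backpropagation gradient:

```python
import numpy as np

def np_gradient_estimate(W, x, t, sigma=0.01, n_samples=20000, seed=0):
    """Vanilla node perturbation for a single linear layer y = W x with
    quadratic loss L = 0.5 * ||y - t||^2.

    Illustrative sketch only: each sample contributes
    (loss differential / sigma) * noise * presynaptic activity,
    averaged over many independently drawn noise vectors.
    """
    rng = np.random.default_rng(seed)
    y = W @ x
    loss_clean = 0.5 * np.sum((y - t) ** 2)
    estimate = np.zeros_like(W)
    for _ in range(n_samples):
        v = rng.standard_normal(W.shape[0])  # noise injected at the output nodes
        loss_noisy = 0.5 * np.sum((y + sigma * v - t) ** 2)
        estimate += ((loss_noisy - loss_clean) / sigma) * np.outer(v, x)
    return estimate / n_samples

rng = np.random.default_rng(42)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
t = rng.standard_normal(3)

true_grad = np.outer(W @ x - t, x)  # analytic dL/dW for comparison
np_grad = np_gradient_estimate(W, x, t)
cos = np.sum(true_grad * np_grad) / (np.linalg.norm(true_grad) * np.linalg.norm(np_grad))
print(f"cosine similarity with BP gradient: {cos:.3f}")
```

The unbiasedness shown here holds only because the noise does not propagate through further layers; for the deeper nonlinear networks discussed in these reviews, the analogous activity-based estimate picks up the multiplicative prefactor derived by the reviewer above.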
4qRCiEZGKd
Neural Description Logic Reasoning over Incomplete Knowledge Bases
[ "Louis Mozart KAMDEM TEYOU", "Luke Friedrichs", "N'Dah Jean Kouagou", "Caglar Demir", "Yasir Mahmood", "Stefan Heindorf", "Axel-Cyrille Ngonga Ngomo" ]
Concept learning exploits background knowledge in the form of description logic axioms to learn explainable classification models from knowledge bases. Despite recent breakthroughs in the runtime of concept learners, most approaches still cannot be deployed on real-world knowledge bases. This is due to their use of description logic reasoners, which do not scale to large datasets. Moreover, these reasoners are not robust against inconsistencies and erroneous data, both being hallmarks of real datasets. We address this challenge by presenting a novel neural reasoner dubbed EBR. Our reasoner relies on embeddings to rapidly approximate the results of a symbolic reasoner. We show that our reasoner solely requires retrieving instances for atomic concepts and existential restrictions to retrieve the instances of any concept in $\mathcal{SROIQ}$. Importantly, our experiments also suggest that our reasoner is robust against missing and erroneous data.
[ "concept learning", "description logic", "knowledge bases", "neural reasoner", "embeddings", "SROIQ", "atomic concepts" ]
Reject
https://openreview.net/pdf?id=4qRCiEZGKd
https://openreview.net/forum?id=4qRCiEZGKd
ICLR.cc/2025/Conference
2025
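The abstract above claims that retrieving instances for atomic concepts and existential restrictions suffices as a building block for retrieving instances of any concept. A toy illustration of such compositional retrieval from (possibly neural) membership scores is sketched below; the threshold `gamma` and all names are hypothetical, not EBR's actual mechanism:

```python
# Sketch: instance retrieval for an existential restriction (exists r.C)
# from concept-membership scores and role assertions.
def instances(scores, gamma=0.5):
    """Entities whose membership score meets the threshold."""
    return {e for e, s in scores.items() if s >= gamma}

def exists_r(role_edges, concept_scores, gamma=0.5):
    """Entities with at least one r-successor retrieved as an instance of C."""
    c_instances = instances(concept_scores, gamma)
    return {head for (head, tail) in role_edges if tail in c_instances}

role_edges = {("alice", "rex"), ("bob", "tweety")}  # role assertions r(head, tail)
concept_scores = {"rex": 0.92, "tweety": 0.31}      # scores for membership in C

result = exists_r(role_edges, concept_scores)
print(result)  # only the entity whose successor passes the threshold
```

Instances of composed concepts such as unions or intersections could then be obtained by set operations over these retrieved sets, which is the kind of composition the reviews below scrutinize.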
{ "note_id": [ "rmyt2Tr2mV", "orAGzJL0Vm", "lor2fjE4GL", "kCVc0IwnYA", "WH6uqfJYRI", "J7ZMqDAYGk", "7hWNjtSSEa" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "decision", "official_review", "official_review" ], "note_created": [ 1729600598855, 1729941807036, 1730724551053, 1734723365128, 1737524016607, 1731162715381, 1730802156856 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9958/Reviewer_jXrQ" ], [ "ICLR.cc/2025/Conference/Submission9958/Reviewer_UN7H" ], [ "ICLR.cc/2025/Conference/Submission9958/Reviewer_JEta" ], [ "ICLR.cc/2025/Conference/Submission9958/Area_Chair_yyHA" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9958/Reviewer_S1Dk" ], [ "ICLR.cc/2025/Conference/Submission9958/Reviewer_kAv9" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel neural reasoner, dubbed EBR, reasoning over incomplete and inconsistent knowledge graphs. Authors propose a neural interpretation for the SROIQ semantics for Descriptive Logic. A substantial survey is conducted. Experiments are carried out in six datasets, with very good results -- achieving near-perfect results in the close world scenario.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper addresses an important issue of knowledge graph reasoning, and follows the embedding approach to deal with the incompleteness and inconsistency of knowledge graphs. The paper is very well polished, from writing to experiments.\", \"weaknesses\": \"It is not clear what neural architectures are used. The datasets used in the experiments are not those used by SOTA papers. After inspecting the supplementary material, I see codes, but do not find datasets. Authors described in section 2.3 that CQD and other neural logical query methods do not support negation, universal restriction and cardinality restriction. 
However, Beta-E supports:\\n \\nBeta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs. H. Ren, J. Leskovec. Neural Information Processing Systems (NeurIPS), 2020.\\n\\nAnd CQD defines the complementary t-conorm as \\\\bottom (x,y) = 1 - \\\\top(1-x, 1-y). This automatically yields a definition of negation. If \\\\top(x,y) = min(x,y), then -x = -\\\\top(x,x) = -\\\\top(1- (1-x), 1- (1-x)) = \\\\bottom(1-x, 1-x) -1. \\n\\nIn Table 2, the neural semantics of \\\\Delta^\\\\mathcal{I} and \\\\emptyset are the same as the semantics of those in Table 1.\", \"questions\": \"1. Are the datasets publicly available?\\n\\n2. line 210: \\\"mapping of DL syntax to a neural semantic syntax\\\". Are you mapping \\\" .. syntax to .. syntax\\\", or \\\".. syntax to a neural semantics\\\"? \\n\\n3. What are the neural architectures of the proposed method?\\n\\n4. In section 3, what is the novelty in methodology?
This approach has a number of limitations and does not really preserve the logical semantics of SROIQ:\\n\\n- the operators defined in the paper seem to differ from the ones I would expect in SROIQ. For instance, I missed how complex role inclusions are handled. Further, I was surprised to see the 'self' operator in the definition, as this typically makes expressive description logics undecidable \\n- The reasoning seems to rely on the closed-world assumption (the evaluation uses a closed-world setting), which is not the semantics of SROIQ, which uses open-world semantics and features real negation as well as ways to implicitly formulate negation. \\n\\nThe evaluation - as mentioned above - is not really suited to show that the approach preserves SROIQ semantics. The setting is much closer to complex querying over a database than logical reasoning. Given this, I miss a comparison with more database-like approaches to neural-symbolic reasoning, e.g. based on Datalog queries.\", \"questions\": [\"How are complex role assertions handled by your reasoner?\", \"How can you deal with the open-world semantics underlying SROIQ?\", \"What other approaches to combining complex database queries with neural methods exist, and how does your approach perform in comparison to these?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel embedded reasoning model called Embedding Based Reasoner (EBR), aimed at addressing the issues of incompleteness and inconsistency in the Knowledge Base (KB). Traditional symbolic inference engines are inefficient and not robust enough when dealing with large-scale or erroneous KBs. 
In this paper, the neural inference engine EBR overcomes these shortcomings by quickly approximating the inference results of the symbolic inference engine through embedding technology.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well organized.\", \"The experiments carried out by the authors are sufficient.\"], \"weaknesses\": [\"The theoretical explanation of the method is limited: EBR uses embedded reasoning techniques, but there is insufficient detailed explanation of its theoretical basis and working principle. The lack of in-depth analysis of the consistency and interpretability of embedded models in DL semantics may affect trust in the robustness and reliability of the method.\", \"Although EBR has significantly improved efficiency on large-scale datasets, there is a lack of detailed quantitative analysis of its computational resource requirements, such as memory consumption and GPU computing resources.\"], \"questions\": [\"Is there a critical point or noise level that significantly reduces the performance of EBR? Can you provide some applicability conditions?\", \"How does EBR ensure consistency between embedded representations and symbolic inference logic?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents Embedding-Based Reasoner (EBR), a neural description logic reasoner designed to handle large-scale, incomplete, and inconsistent knowledge bases. EBR approximates logical reasoning under the SHOIQ syntax by using existing neural embeddings for the KB, aiming to provide a scalable and robust solution for handling noisy data. Although the problem formulation of this work is both interesting and important and the paper is well-written, it also has some flaws and gaps. 
The theoretical analysis in the paper is limited, and the experimental section also struggles to provide convincing results.\\nSpecifically, the paper does not provide a comparison with state-of-the-art (SOTA) experimental results. Thus, I recommend rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"The author of the paper did not provide a rebuttal.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This is a very well-written paper on an interesting problem. The paper is mostly sound. But it downplays the capabilities of related work and overpromises on what it accomplishes. Furthermore, the originality of the proposed approach is minimal - which might perhaps be ok if there were a comprehensive evaluation, which, however, is not part of the paper.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"1 The targeted problem is interesting\\n2 The proposed approach can be easily reproduced\", \"weaknesses\": \"1 The paper overpromises\\n2 Suggested method is a trivial extension of existing methods\\n3 Experimental comparison with related work is weak\\n4 Description of the experimental setup is weak\\n\\n\\n*1 The paper overpromises*\\nIt says \\\"We propose neural semantics to tackle the instance retrieval problem on incomplete or inconsistent SROIQ KBs\\\".\\nWhat the paper actually does is that it (step A) computes a backbone based on a very limited set of axioms only containing instance assertions, role assertions, and subsumption axioms of the explicit form (C rdfs:subclassOf D). 
This is a tiny subset of SROIQ axioms.\\nBased on this subset, it allows for (step B) the querying of SROIQ concepts from a very limited set of queries, i.e., the ones listed in Table 3, but no recursive definition of concept expressions was applied, again underutilizing the capabilities of SROIQ (at least I could not read this from the paper).\\n\\n*2 Suggested method is a trivial extension of existing methods*\\nThe two steps (step A) and (step B) could have been trivially done by a range of Complex Query Answering methods. \\n\\n*3 Experimental comparison with related work is weak*\\nThe proposed approach is an approximation. The only comparisons are made against sound (and complete) semantic reasoners. Other trivially available approximations are not considered. As mentioned above, complex query-answering methods are available. Even if a few constructs were not available in a particular answering method, others would be and could be compared.\\nSimilarly, and perhaps worse, the statement that description logic embeddings do not support instance retrieval is wrong. Already, the computation of the backbone in these methods could have been more powerful than the (step A) suggested here, and a comparison to their approximation would be easily possible. Note, for example, that a union query would be trivially available for box or ball embeddings by exactly returning the disjunction of the two elements. Note that this is a *trivial* modification and does not change these suggested methods, since no complex composition of concept expressions is required.\\n\\n*4 Description of the experimental setup is weak*\\nThe procedure for constructing wrong axioms is unspecified.\\nThe procedure for generating queries remains vague (are retrievals of composed concept expressions part of the queries?)\\nIt is unclear to what extent the benchmark datasets do or do not exploit the expressiveness of SROIQ.\", \"questions\": \"Please clarify the issues I mentioned under weaknesses, esp. 
ones related to *4 Description of the experimental setup is weak*\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents Embedding-Based Reasoner (EBR), a neural description logic reasoner designed to handle large-scale, incomplete, and inconsistent knowledge bases. EBR approximates logical reasoning under the $\\\\mathcal{SHOIQ}$ syntax by using existing neural embeddings for the KB, aiming to provide a scalable and robust solution for handling noisy data. The experimental results demonstrate superior instance retrieval performance of EBR over conventional symbolic reasoners including HermiT, Pellet, JFact, and Openllet, across several benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(1) This work explores the important field of neuro-symbolic reasoning, which is crucial for advancing knowledge representation and reasoning, especially for real-world applications where incomplete or noisy data is unavoidable.\\n\\n(2) The time efficiency for performing reasoning on large, noisy KBs is also important in practice. \\n\\n(3) The paper provides a detailed background of description logic and SHOIQ syntax, offering clear formulations that help readers understand the context of the task and the proposed approach.\", \"weaknesses\": \"(1) First, the technical contribution is unclear. Although the paper introduces EBR as a novel neural reasoner for incomplete/inconsistent KBs, it heavily relies on existing neural embedding techniques, with limited originality beyond adopting the embeddings for DL-based reasoning. This makes it unclear what aspects of EBR are technically new.\\n\\nI understand the theoretical contribution to be the introduction of the mapping between DL syntax and neural semantics. 
However, as EBR is only applied to the task of instance retrieval, it remains unclear whether, and to what extent, the mappings help to improve the performance of the reasoner. Besides, no evidence (e.g., a provable guarantee) was given to justify the correctness on the theoretical side, which further limits the significance of the work.\\n\\nSeveral improvements could be made to the paper, including (i) presenting an explicit comparison of how EBR fundamentally differs from prior approaches; (ii) providing evidence (if any), such as a theoretical guarantee, to validate the contribution. \\n\\n(2) The evaluation process is unclear. Section 4 and Section 5 do not explain in detail how to conduct \\u201cinstance retrieval\\u201d in a given KB. How does the EBR reasoner work in the experiments? For example, does this process involve reasoning over graph structure? How is the score for each entity with each concept computed? \\n\\nSection 3.2 introduces the link prediction task, which is a standard task over graph data. However, the experiments only conduct instance retrieval but not link prediction. Does instance retrieval relate to link prediction? If so, this should be clarified to avoid confusion. If not, then what is the purpose of mentioning link prediction in Section 3.2?\\n\\nHow were the KB embeddings trained? Does the KB embedding training process form a part of EBR? All the details, including the input/output scheme, encoding/decoding process, message passing mechanism, loss function, etc., are essential for readers to understand the working process and the utility of EBR. (The detailed settings might be given in the appendices, but the current version does not contain them.)\\n\\n(3) Lack of Comparisons with Neural Embedding-based Models. 
In the paper, EBR is only compared with traditional symbolic reasoners, while there is no comparison with recent neural-based or hybrid models that can also handle incomplete data (e.g., rule learning models including Neural-LP [1], DRUM [2], or ontology-aware neural models). \\n\\n[1] Fan Yang, Zhilin Yang, William W. Cohen. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. NeurIPS 2017\\n\\n[2] Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, Daisy Zhe Wang. DRUM: End-To-End Differentiable Rule Mining On Knowledge Graphs. NeurIPS 2019\\n\\n(4) A case study and detailed analysis should be presented. The current evaluation only reports the Jaccard similarity, F1 scores, and running time for instance retrieval, which are all high-level statistics and provide little insight into the underlying working process and benefits of EBR. \\n\\nTo improve this, instead of simply reporting the metric scores on every dataset, I suggest that the authors include an analysis of some cases extracted from any dataset. For example, by comparing the differing performance of EBR and the baselines, readers could gain more insight into why EBR or any baseline gets a case right or wrong.\\n\\n**Minor issues**\\n\\n(1) Line 101, \\u201c\\u2026iff $C^\\mathcal{I} \\sqsubseteq D^\\mathcal{I}$\\u2026\\u201d should be \\u201c$C^\\mathcal{I} \\subseteq D^\\mathcal{I}$\\u201d. \\n\\n(2) Line 241, \\u201cThe syntax and semantics for concepts in SROIQ are provided in the appendix.\\u201d---They are not in the appendix.\\n\\n(3) All the tables in the appendix need to be discussed. Leaving the tables alone without any analysis provides little information for the readers.\", \"questions\": \"(1) Section 3.2 introduces the link prediction task, which is a standard task over graph data, but how does it relate to the instance retrieval that is conducted in the experiments? Also, it seems no (standard) link prediction experiment was conducted. 
If so, what is the purpose of mentioning link prediction in Section 3.2?\\n\\n(2) How were the KB embeddings trained? Does the KB embedding training process form a part of EBR? These details are essential for readers to understand the working process and utility of EBR. \\n\\n(3) The authors claim that EBR could scale to large datasets. But according to the dataset statistics in the appendix, the largest dataset, Vicodi, only has 33K instances and 116K assertions. On the other hand, real-world KBs such as Freebase and DBpedia are typically at the million or even billion scale. I wonder if there is any standard convention to conceptualize \\u201clarge-scale KBs\\u201d? I am also curious about whether the proposed EBR can handle KBs at the scale of Freebase?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4pRwkYpa2u
Rethinking Light Decoder-based Solvers for Vehicle Routing Problems
[ "Ziwei Huang", "Jianan Zhou", "Zhiguang Cao", "Yixin XU" ]
Light decoder-based solvers have gained popularity for solving vehicle routing problems (VRPs) due to their efficiency and ease of integration with reinforcement learning algorithms. However, they often struggle with generalization to larger problem instances or different VRP variants. This paper revisits light decoder-based approaches, analyzing the implications of their reliance on static embeddings and the inherent challenges that arise. Specifically, we demonstrate that in the light decoder paradigm, the encoder is implicitly tasked with capturing information for all potential decision scenarios during solution construction within a single set of embeddings, resulting in high information density. Furthermore, our empirical analysis reveals that the overly simplistic decoder struggles to effectively utilize this dense information, particularly as task complexity increases, which limits generalization to out-of-distribution (OOD) settings. Building on these insights, we show that enhancing the decoder capacity, with a simple addition of identity mapping and a feed-forward layer, can considerably alleviate the generalization issue. Experimentally, our method significantly enhances the OOD generalization of light decoder-based approaches on large-scale instances and complex VRP variants, narrowing the gap with the heavy decoder paradigm. Our code is available at: https://github.com/ziweileonhuang/reld-nco.
[ "Combinatorial Optimization", "Vehicle Routing Problem", "Generalization" ]
Accept (Poster)
https://openreview.net/pdf?id=4pRwkYpa2u
https://openreview.net/forum?id=4pRwkYpa2u
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uvhqLL42jJ", "s0KDYg1EZm", "qAuXYImf7i", "ilXBPj4Nb6", "PtVpTK5Zks", "ITPVTfAgDM", "GwkmIIFpQ6", "4GLKx6g2Aj" ], "note_type": [ "official_review", "official_comment", "official_review", "decision", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1730687845705, 1732550385016, 1730037128975, 1737523530930, 1729749876467, 1730875879322, 1734625625278, 1730778353384 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2769/Reviewer_p2Jk" ], [ "ICLR.cc/2025/Conference/Submission2769/Area_Chair_DccF" ], [ "ICLR.cc/2025/Conference/Submission2769/Reviewer_nQhH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2769/Reviewer_EM2N" ], [ "ICLR.cc/2025/Conference/Submission2769/Reviewer_SfnF" ], [ "ICLR.cc/2025/Conference/Submission2769/Area_Chair_DccF" ], [ "ICLR.cc/2025/Conference/Submission2769/Reviewer_NKcN" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies the limitations of light decoder-based solvers for Vehicle Routing Problems (VRP). Modern RL methods for VRP typically employ light decoders for solution generation due to their efficiency. However, the authors think the light decoders may not capture the problem structure well. So they proposed Revised Light Decoder (ReLD) which modified the original light decoder and make it contain richer information.\\n\\nThe experiment results show that their framework can improve the current state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper approaches the deep learning for VRP problem from a new perspective: make the decoder contain richer information.\\n2. The modification is not complicated and the results looks good.\", \"weaknesses\": \"1. The scalability issue still exists. There are no large-scale experiments conducted.\", \"questions\": \"1. Do you have any results on the large-scale instances, e.g., CVRP-10000?\\n2. 
Can this method be transferred to TSP or other CO problems? Do you have any preliminary results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Important: Please Review Rebuttals and Update Reviews as Needed\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your hard work and dedication to providing thoughtful reviews for this year\\u2019s ICLR submissions. Your efforts play a vital role in maintaining the conference\\u2019s high standards and fostering meaningful discussions in the community. \\n\\nAs we are close to the end of the discussion phase, I kindly urge you to read the authors\\u2019 responses and reevaluate your reviews carefully, especially if they address your concerns. If you find that the authors have resolved the issues or clarified misunderstandings, please consider adjusting your comments and scores accordingly. This ensures fairness and gives each submission the opportunity to be judged on its merits. \\n\\nYour continued commitment is greatly appreciated\\u2014thank you for contributing to the success of ICLR!\"}", "{\"summary\": \"The authors first address the challenges of light decoder-based approaches for vehicle routing problems (VRPs). Because the traditional approach relies on static embeddings that must capture complex information within a single set of representations, it is difficult for the simplistic decoder to fully leverage this information, particularly in out-of-distribution (OOD) scenarios (such as generalizing to larger instances or different VRP variants). Enhancements to the decoder are thus introduced, such as adding an identity mapping and a feed-forward layer, to mitigate this issue. 
Experimental results demonstrate that this adjustment improves both in-distribution and OOD generalization performance, narrowing the gap between light and heavy decoder paradigms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors effectively highlight a key issue with static embeddings: the gap between the information needed for optimal decoder performance and the information stored in the context vector. This insight is both logical and significant, emphasizing a critical area for future improvements in this field. The research in this direction holds promising implications for advancing VRP solutions.\", \"weaknesses\": \"(1) The issue of the overly complex context vector is not directly addressed. The authors\\u2019 solution is somewhat simplistic, adding only extra structures to the decoder without any modification to the encoder or main inference process. This solution still has the same 'complex context vector' problem as described in \\\"Gap between Policy Formulations\\\" subsection (LINE 195). Given this limited change, the improvement in performance, such as in CVRP100, is also marginal.\\n\\n(2) One of the paper's main claims is that their method improves out-of-distribution (OOD) performance. However, the practicality of the approach based on zero-shot generalization is unclear. Why is it necessary for a model trained on N=100 cases to perform well on N=1000 cases? There are many established methods to address OOD problems, such as fine-tuning before inference, tree search or active learning during inference. The proposed modifications should ideally be evaluated in these more practical and realistic settings rather than in zero-shot scenarios.\\n\\n(3) With additional parameters in the decoder, the training burden likely increases. The training details are largely missing, particularly regarding how the proposed modifications affect training time and resources. 
The authors should also compare their approach to other potential modifications that increase parameters in the decoder, such as adding an additional decoding layer, in terms of training efficiency and resource requirements, as well as the solver performance.\", \"questions\": \"These are not questions, but rather minor shortcomings related to the paper\\u2019s writing.\\n\\n(1) There are too many versions of ReLD introduced in the paper, and it is difficult to follow what each version represents.\\n\\n(2) Shouldn't Figure 1 also reflect the changes described in Section 3.2, Powerful Query? Currently, it seems to only illustrate the changes discussed in Section 3.1, Direct Influence of Context.\\n\\n(3) Table 5 contains several mis-copied numbers and incorrect placement of bold formatting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper discusses the limitations of the Heavy Encoder-Light Decoder based model, which is commonly used in combinatorial optimization. Specifically, it addresses the problem where the encoder must embed all possible contextual information required during the decoding process into a single embedding, resulting in excessively high information density. It also highlights how the decoder fails to utilize this information. As a solution, the authors propose ReLD. The proposed method is validated through experiments on CVRP and VRP variants, demonstrating its effectiveness. 
In particular, the method showed superior performance compared to existing Light Decoder-based solvers when faced with out-of-distribution (OOD) problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper analyzes the limitations of the Light Decoder-based model and demonstrates the complexity of the encoded information and lack of generalization performance through experiments.\", \"This paper proposes a novel approach by modifying the decoder to overcome the issues inherent in Light Decoder-based models, and demonstrates the superiority of the proposed method through experiments.\", \"In the CVRP generalization experiments, it showed more promising results than existing Light Decoder-based models and performed well across various VRPs.\"], \"weaknesses\": [\"There are unclear parts in the equations and content of Section 3 Methodology. It also does not match well with Figure 1(e). It is unclear whether $Q$ in Figure 1 (e) is the same as $q_c$ in Equation (11) or not. It is also difficult to understand the structure of the proposed neural network in Equations (7) to (11). For example, it is not clear how $q_c$ in Equation (11) is used. Relevant questions can be found in the Question section.\", \"In the experiments, it is unclear whether the improvement in VRP solution accuracy is due to the addition of the feedforward network itself or simply the increase in decoder parameters caused by adding FF. It seems necessary to conduct a comparative evaluation with a decoder where the parameters are increased equivalently without FF. Furthermore, a more solid logical explanation is needed on how the FF helps overcome the limitations of the light decoder.\"], \"questions\": [\"How is $q_c$ in Equation (11) used?\", \"What are $W_S^OW_S^V$ in Equation (8)? Does this imply performing value projection followed directly by output projection? As far as I know, such an operation does not exist in the original transformer. 
Could you provide a more detailed explanation regarding this part?\", \"The modification presented in this paper adds an FF layer to the POMO decoder, and the decoders in models like POMO operate almost as many times as the number of nodes in an auto-regressive manner. Therefore, it is expected that the model modification proposed in this paper will increase both training and inference (optimization) times. However, in line 361 of the paper, it states that the additional step-wise running time is independent of the number of nodes and is computationally efficient. Could you please provide a more detailed explanation on this? If possible, it would be helpful to provide numbers on the changes in training and inference times due to the model modification.\", \"How effective is the Distance Heuristic($-log(dist_i)$) in Equation 12? Is there any information on how the model performs if this heuristic is removed?\", \"In Table 3, the CVRP100 gap for POMO augx8 is 1.004%, which shows a significant difference from the 0.32% gap for CVRP100 presented in the original POMO paper. Could you explain the reason for this discrepancy?\", \"In Table 3, the addition of the ff layer appears to provide little benefit for CVRP100. Why does adding the ff layer to the decoder not contribute to performance improvement for a problem size 100?\", \"As a minor comment, the font size in Tables 3 and 4 is too small.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an analysis of light decoder-based solvers for VRP, specifically addressing the challenges of generalization to out-of-distribution (OOD) problem instances. By identifying limitations due to the reliance on static embeddings, the authors propose a modified approach, ReLD (Rethinking Light Decoder), which incorporates identity mapping and a feed-forward layer to enhance the decoder\\u2019s capacity. 
The proposed model demonstrates improved OOD performance across a variety of VRP instances, narrowing the performance gap between light and heavy decoder approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a detailed breakdown of the light decoder\\u2019s limitations in VRP, particularly the static embeddings\\u2019 burden on the encoder.\\n\\nReLD addresses an important need in VRP research: generalization across problem scales. \\n\\nThe modifications retain the light decoder\\u2019s computational efficiency, which could be advantageous for applications needing faster routing solutions without the computational load of heavy decoders.\", \"weaknesses\": \"Limited Scalability for Large Instances: ReLD struggles to compete with heavy decoder architectures on very large instances, such as CVRP1000. This limitation suggests that ReLD\\u2019s current modifications might not be sufficient for all scales of VRP.\\n\\nThe proposed modifications are relatively minor adjustments. While effective, they lack substantial novelty within the machine learning field.\\n\\nWhile this paper makes a valuable contribution by revisiting the light decoder paradigm and identifying limitations in current architectures, its primary innovations are modest. The architectural modifications, though effective, are straightforward and may not sufficiently address scalability issues, particularly in very large instances or complex real-world VRP variants. 
Moreover, the method is not showing SOTA results on the biggest problem instances, which is assumed to be the main advantage of the method.\", \"questions\": \"How does ReLD manage complex or dynamic VRP constraints, such as real-time updates or varying demands?\\nHow could larger decoder modifications enhance OOD performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper analyzes the limitations of light decoder-based methods for solving Vehicle Routing Problems (VRPs). The authors clearly identify that static embeddings in the encoder result in dense information that the light decoder struggles to utilize, especially when generalizing to larger or more complex VRP instances. Their proposed solution, ReLD, introduces simple yet effective modifications to the decoder, specifically identity mapping and a feed-forward layer. These enhancements alleviate the burden on the encoder and improve the decoder\\u2019s capacity to handle context-specific information. This approach maintains the computational efficiency of light decoder-based methods while improving out-of-distribution (OOD) performance.\\n\\nThe strengths of the paper lie in its clear problem diagnosis, logical modifications, and thorough empirical evaluation. The authors demonstrate that ReLD significantly improves generalization performance on large-scale and complex VRP variants, narrowing the gap between light and heavy decoder paradigms. Notably, ReLD performs well on real-world datasets and scales effectively to instances as large as CVRP16K. The paper\\u2019s insights provide a valuable contribution to neural combinatorial optimization, offering a balance between performance and efficiency.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer discussion was constructive and led to meaningful improvements to the paper. 
Reviewers raised concerns about scalability, clarity in the methodology, and the novelty of the modifications. The authors addressed these points by providing additional experiments on large-scale instances, clarifying key equations, and demonstrating the effectiveness of their decoder enhancements compared to other parameter-increasing strategies.\"}", "{\"summary\": \"The paper revisits light decoder-based solvers for VRPs, recognized for their efficiency but limited in generalization to larger or varied problem instances. The authors attribute this limitation to the handling of static embeddings, which creates high information density in the encoder, overwhelming the simplistic decoder. To overcome these challenges, they propose an enhanced decoder structure, incorporating identity mapping and feed-forward layers to effectively boost the decoder\\u2019s capacity and improve generalization performance. The authors perform experiments to demonstrate that ReLD achieves better generalization performance on both in-distribution and out-of-distribution tasks, closing the performance gap with heavier decoders while maintaining computational efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow\\n2. A thorough empirical analysis is conducted to validate the potential limitation of current light decoder-based solvers\\n3. A simple but effective modification is performed to improve the decoder part of the current decoder-based solvers\\n4. Experiments are conducted on many datasets covering different distributions, problem sizes, and problem classes.\", \"weaknesses\": \"The analysis in this paper is largely 'end-to-end,' with a strong reliance on empirical results presented towards the conclusion. 
Several concerns regarding the encoder and decoder architectures are raised, and it may be beneficial to adopt a more direct investigative approach to these components.\\n\\nAdditionally, the limitations of the current light decoder-based model are inferred from empirical experiments conducted solely on CVRP problems, focusing on LEHD and POMO models. A broader analysis including more models with both light and heavy decoder architectures would provide a more comprehensive foundation for the conclusions drawn.\\n\\nThe insights presented offer valuable guidance on improving the decoder architecture to address limitations associated with overly simplified decoders. This work implements a minor modification in this direction; however, the extent to which further increases in model complexity would yield additional performance gains remains uncertain. This issue points to a trade-off between model performance and efficiency, though an optimal balance between the two has yet to be determined.\\n\\nOverall, I appreciate the discussions and architectural considerations raised by this paper concerning VRP model design. Currently, I lean toward borderline acceptance.\", \"questions\": \"Besides the concerns raised in the weaknesses part, I have the following additional questions:\\n\\n1. In Table 5, please check whether all the numbers you are reporting are correctly documented. The bolded number in OVRPLTW and the bolded number in OVRPBLTW are either too big or too small and do not correspond to the reported gaps.\\n2. Figure 1 could be further improved, e.g., the font size and the caption.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4oj7tYujwP
ERiC-UP$^3$ Benchmark: E-Commerce Risk Intelligence Classifier for Detecting Infringements Based on Utility Patent and Product Pairs
[ "Zhuo Li", "Yuhao Du", "Ruifei Zhang", "Jiaheng Jian", "Zhanfeng Chen", "Peng Zhou", "Haifan Gong", "Xuanye Zhang", "Lianghui Chen", "Jia-Dong Zhang", "Zhiyuan Liu", "Xiang Wan", "Haofeng Li", "Anningzhe Gao", "John Sun" ]
Innovation is a key driver of economic and social progress, with Intellectual Property (IP) protection through patents playing a crucial role in safeguarding new creations. For businesses actively producing goods, detecting potential patent infringement is vital to avoid costly litigation and operational disruptions. However, the significant domain gap between products and patents—coupled with the vast scale of existing patent databases—makes infringement detection a complex and challenging task. Besides, the machine learning (ML) community has not widely addressed this problem, partly due to the lack of comprehensive datasets tailored for this task. In this paper, we firstly formulate a new task: detecting potentially infringing patents for a given product represented by multi-modal data, including images and textual descriptions. This task requires a deep understanding of both technical and legal contexts, extending beyond simple text or image matching to assess functional similarities that may not be immediately apparent. To promote research in this challenging area, we further introduce the ERiC-UP$^3$ ($\textbf{E}$-commerce $\textbf{R}$isk $\textbf{i}$ntelligence $\textbf{C}$lassifier on $\textbf{U}$tility $\textbf{P}$atent $\textbf{P}$roduct $\textbf{P}$air) benchmark, a large-scale, well-structured dataset comprising over 13-million patent samples and 1 million product samples. It includes 11,000 meticulously annotated infringement pairs for training and 2,000 for testing, all rigorously reviewed by patent experts to ensure high-quality annotations. The dataset reflects real-world scenarios with its multi-modal nature and the necessity for deep functional understanding, offering unique characteristics that set it apart from existing resources. As a case study, we provide results from a series of baseline methods and propose a simple yet effective infringement detection pipeline. 
We also explore additional approaches that may enhance detection performance, such as text style rewriting, cross-modal matching effectiveness, and image domain alignment. Overall, the ERiC-UP$^3$ benchmark is the first strictly annotated product-patent infringement detection dataset and stands as the largest multi-modal patent dataset, as well as one of the largest multi-modal product datasets available. We aim to advance research extending language and multi-modal models to diverse and dynamic real-world data distributions, fostering innovation and practical solutions in IP infringement detection.
[ "Benchmark; Product-Patent Infringement Detection; Large-scale Multi-Modality Dataset; Contrastive Learning; Retrieval; Domain Gap" ]
Reject
https://openreview.net/pdf?id=4oj7tYujwP
https://openreview.net/forum?id=4oj7tYujwP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "btTe8tLz2R", "aXaupyIjAg", "YeT3keGAV3", "9xGf9cYK3Z", "3bWJovGSnx" ], "note_type": [ "official_review", "official_review", "decision", "official_review", "meta_review" ], "note_created": [ 1730948727148, 1730394631946, 1737523541371, 1730715485575, 1734557701934 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2924/Reviewer_Jxy9" ], [ "ICLR.cc/2025/Conference/Submission2924/Reviewer_Pmou" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2924/Reviewer_yHKi" ], [ "ICLR.cc/2025/Conference/Submission2924/Area_Chair_gAo1" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a benchmark dataset for E-commerce intelligence for the machine learning field. This research narrows the gap between the E-commerce area and current artificial intelligence research. In addition, this draft further provides analysis of the proposed benchmark and gives several baseline methods as references for future research works.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This work narrows the gap between e-commerce and machine learning research, which is a valuable attempt and has the potential to further enlarge the impact of machine learning.\\n2. In addition to the proposed benchmark dataset, it also provides detailed statistical analysis with several backbone experiments for reference.\\n3. Overall, the writing is easy to follow.\", \"weaknesses\": \"1. Some parts of the draft are not well-prepared, such as Tab. 7 and 8. The overall format needs a careful polish.\\n2. Even if the proposed benchmark is for multi-modal learning, especially for vision-language interaction, I still think this topic fits better for data mining or multi-media conferences, especially considering it is a dataset-oriented paper.\\n3. Captions of figures and tables need to be enriched. At least, they need to indicate the conclusion of the tables and figures. 
Overall, they need to be more informative.\", \"questions\": \"Please check the weaknesses section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents ERiC-UP$^3$, a dataset with annotations to detect infringement behaviors between a given Amazon product and existing patents. This dataset includes 1 million product samples, 13 million patent samples, and 11,000 meticulously annotated infringement pairs for training and 2,000 for testing. This work benchmarks existing baselines and proposes a two-stage pipeline for effectively conducting infringement detection. This paper also provides some best practices to improve detection.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. A new dataset for detecting infringements in the patent domain (the unique aspect lies in the annotation).\\n2. A proposed pipeline to surpass existing methods.\\n3. Some useful takeaways to improve detection.\", \"weaknesses\": \"1. The dataset offers some novelty but is largely a domain adaptation from existing datasets like [1] and [2]. Its main advantage lies in expert annotations on infringement cases. However, the dataset\\u2019s scale is limited, with relatively few annotations, and the patent and product samples were scraped from the internet. Additionally, the distinction between \\\"base\\\" and \\\"large\\\" versions is minimal.\\n\\n2. The writing lacks clarity, making it hard to grasp key points at first glance: (1) Is infringement treated as a ranking problem? (2) What constitutes the \\\"domain gap\\\"? Is it simply a stylistic shift? (3) Why were these particular classes selected?\\n\\n3. The technical pipeline appears ad hoc. Why use a two-stage approach instead of a streamlined, end-to-end model? Why can't existing models address this problem effectively? 
Why wasn\\u2019t the current infringement detection pipeline integrated into the study?\\n\\n4. Key baselines are missing from this study: (1) multimodal baselines, such as LLaVA, and (2) baselines from prior infringement detection research.\\n\\n5. Important ablations on the pipeline components are absent. For instance, how does removing expert labels affect training? What are the results if detections are run without training labels?\\n\\n6. The analysis part is shallow, with findings that are largely known within the field.\\n\\n7. The literature review lacks the recent works.\\n\\n8. Some obvious typos in number and upper/lower case.\\n\\n[1] A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models\\n[2] TMID: A Comprehensive Real-world Dataset for Trademark Infringement Detection in E-Commerce\", \"questions\": \"see wearkness. I also has a question about the significant of this work: 1) Can Google Patents (https://patents.google.com/) be used for detect infringement? 2) Is Amazon conduct infringement screening before releasing the product? If they do so, i think that only very limited samples in the ERiC-UP$^3$ involves infringement, and training model with ERiC-UP$^3$ cannot significant detect real-world product.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The article proposes a new task of detecting potential infringing patents for a given product, and introduces a large-scale MultiModal Machine Learning dataset called ERiC-UP3, aimed at promoting research in this field. 
The dataset contains over 13 million patent samples and 1 million product samples, providing real-world scenarios needed for deep functional understanding to promote innovative and practical solutions for intellectual property infringement detection. It also provides some evaluation baselines and testing methods. In essence, it has the following setting:\", \"search_task_set\": \"Retrieve the patent q that product p is most likely to infringe and give the probability ranking of infringing patents in the patent list\", \"task_objective\": \"Ensure that patents with the most similar functions and potential infringement are ranked highest in the list sorting\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Marking products and infringing data is a very heavy workload, this work has done some valuable efforts to annotate the data.\\n\\nThe text data of the product and the patent have been rewritten, which can avoid the potential meaning difference.\", \"weaknesses\": \"The writing is rather confused.\\n\\nThis article mainly discloses the patent product infringement data of MultiModal Machine Learning, but the main text mainly discusses the single modality of text, and there is little use and verification of image modality.\\nFor datasets, some graphic and table information is invalid. There is a lot of redundant information in the paired graphic and text dataset. 
This article rarely mentions and verifies how to ensure that this data is effective for training, and rarely uses this data for experiments and verification of graphic and text information for infringement conflicts.\\nThe expert evaluation mentioned in the article mainly evaluates whether there is infringement between the product and the patent, rather than evaluating the validity of the data\\n\\nHowever, from the perspective of CS, it is not clear whether these MultiModal Machine Learning data are effective and what the purpose of using these data is.\", \"i_have_several_questions_regarding_this_work\": \"1. The overall framework of the paper is quite chaotic, and the research framework is not clear.\\n\\n2. The experiment is comprehensive, but many tables have unclear meanings and are chaotic, a bit like an experimental report\\n - The Table 7 compares which method, I can't tell, and it's not specifically written in the article, just said the score is high.\\n - In Table 8, Using LLM to rewrite the text data of the product and the patent can effectively avoid the difference in emphasis between the two. However, it is hasty and inaccurate to determine that 0.5b qwen is the best for only three categories. Llama3-8b also has multiple high scores. Why not consider llama3-8b?\\n\\n\\n3. In Figure 6, the significance of calculating the recall rate of the top 500 is not great, and the average value of each CPC category is not given, and it cannot be seen that this mAR@500 is a good evaluation index, and the number of samples of each CPC classification is very different, the variance is very large, why not use top 10% or top 1% as the evaluation index, as shown in the figure below is the order of magnitude of 10 ^ 6. \\n\\n4. 
The experimental framework of MultiModal Machine Learning fusion retrieval is not clear\\n For example, how to evaluate after image retrieval mAR@500 scores are not given\\n Why is it first evaluated through text matching to the relevant patent pool, and then evaluated through image retrieval, rather than directly conducting image retrieval (missing this experiment)?\\n\\n This article mainly conducts experiments on text modality, with little emphasis on the role of MultiModal Machine Learning data, and does not reflect the significance of MultiModal Machine Learning data for infringement retrieval.\\n\\nRegarding the experiment of MultiModal Machine Learning in this article, the following questions are raised:\\n\\n- The MultiModal Machine Learning experiment in the main text of this article is just a simple stitching, text classification + image retrieval. Where is the specific integration of MultiModal Machine Learning reflected?\\n- The experiments are all here and there, and the overall performance of MultiModal Machine Learning cannot be seen\\n- Table 9 shows the image retrieval results after text classification. Which method is used for the first step of text classification? Or is it directly given classification for image retrieval to eliminate errors caused by text classification? If not, how to eliminate errors? Why not directly retrieve images? The explanation is not comprehensive enough.\\n\\n\\n5. The experimental data is incorrect. The experimental table of MultiModal Machine Learning in the above text is different from the data given in the last supplementary material.\\n - The experimental data of the plain text of table5 and table16 are inconsistent, and there is no other data of 71.43.\\n \\n6. 
Many of the experiments in the supplementary materials are not mentioned in the main text, and there is no clear definition of how the methods are done, how to conduct mixed experiments, or how to conduct mixed voting, and how to evaluate them\\n - According to Table 16, the article only mentions simple concatenation, but does not explain how the following two fusion are done, and the description is quite confusing. Is the voting experiment at the end just a simple union of the results of the baselines of the original two modes? Or are there other voting operations?\", \"questions\": \"as above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies the problem of detecting potential infringing patent for a given product. To achieve it, the authors first introduce a large-scale multimodal dataset called ERiC-UP3, then a two-stage infringement detection solution is proposed.\\n\\nThe reviewers all acknowledge the importance of the newly proposed dataset, and recognize the heavy workload of the annotation. Some reviewers also appreciate the proposed pipeline and recognize the success of suppress other baselines in the evaluation. While a number of concerns and questions are raised. 1. The paper structure and writing are unclear and not well organized, particularly the figures are not well explained pointed by the reviewers. 2. The experiments focus mostly on text, lacking proper validation of the image modality of the multimodal data. 3. The dataset has some novelty with expert annotations, but is limited in scale. Hence, I conclude that the paper could not be accepted in its current form and would require a major revision.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal provided.\"}" ] }
4oQHCmnM8R
A Theory of Multi-Agent Generative Flow Networks
[ "Yinchuan Li", "Haozhi Wang", "Leo Maxime Brunswic", "Shuang Luo", "Jianye HAO" ]
Generative flow networks utilize a flow-matching loss to learn a stochastic policy for generating objects from a sequence of actions, such that the probability of generating a pattern can be proportional to the corresponding given reward. However, a theoretical framework for multi-agent generative flow networks (MA-GFlowNets) has not yet been proposed. In this paper, we propose the theory framework of MA-GFlowNets, which can be applied to multiple agents to generate objects collaboratively through a series of joint actions. We further propose four algorithms: a centralized flow network for centralized training of MA-GFlowNets, an independent flow network for decentralized execution, a joint flow network for achieving centralized training with decentralized execution, and its updated conditional version. Joint Flow training is based on a local-global principle allowing to train a collection of (local) GFN as a unique (global) GFN. This principle provides a loss of reasonable complexity and allows to leverage usual results on GFN to provide theoretical guarantees that the independent policies generate samples with probability proportional to the reward function. Experimental results demonstrate the superiority of the proposed framework compared to reinforcement learning and MCMC-based methods.
[ "Generative Model" ]
Reject
https://openreview.net/pdf?id=4oQHCmnM8R
https://openreview.net/forum?id=4oQHCmnM8R
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q8SZMSa8m3", "oZAO6DQYMa", "oMtXRJSgZv", "kIBYRlFEvU", "dapb6OovI4", "bUVP2VLes1", "bDp2fbdjll", "a8ZNHZIOyN", "VdVy7FupOH", "RUqV733qUc", "PRpkkO61Jv", "KFEFXxBQHP" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1730713142705, 1732598862004, 1732760267096, 1731442141522, 1732759221248, 1732528364245, 1731441906391, 1737523973459, 1734764117508, 1731443809075, 1730663901824, 1730467774474 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9282/Reviewer_CN8s" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Submission9282/Reviewer_ZXFS" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9282/Area_Chair_uXe1" ], [ "ICLR.cc/2025/Conference/Submission9282/Authors" ], [ "ICLR.cc/2025/Conference/Submission9282/Reviewer_ZXFS" ], [ "ICLR.cc/2025/Conference/Submission9282/Reviewer_eWbR" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies the theory of multi-agent generative flow networks for co-operative tasks. The paper proposes four learning patterns: the Centralized Flow Network (CFN), Independent Flow Network (IFN), Joint Flow Network (JFN), and Conditioned Joint Flow Network (CJFN) algorithms. 
The paper also does experiments on the toy hyper-grid environment and one StarCraft game.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality: The paper is one of the first to study the extension of Gflownets to multi-agent settings.\\nQuality\\uff1aThe paper proposes four types generative algorithms, and discuss the difference of these algorithms in terms of the training complexity and the performance.\", \"significance\": \"Experiments validates the proposed method outperforms MAPPO, MCMC in terms of modes found and L1 error.\", \"weaknesses\": \"1.For the clarity, I would suggest that the authors choose the original Gflownet formulations. The FM formulations in this paper and the original FM paper are quite different, which is quite hard to follow the main idea of this paper.\\n2.What's the main challenge that extend the Gflownet to multi-agent settings? For now, there seems no technical difficulty for multi-agent Gflownets. \\n3.The paper only studies the flow matching objective? does the proposed method applies to other Gflownet learning objectives, such as the detailed balance and the trajectory balance loss?\\n4.For the experiments, which algorithm is the best? In the common sense, CFN achieves the best performance. Also, the L1 error of all algorithms are quite high, i.e., these algorithms can not sample the reward distribution in practice. 
Why does the paper only present the result of JFN on the StarCraft 3m map?\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer CN8s (Part 2)\", \"comment\": \"**Response to Comment 4:**\\n*(1)* In some relatively small experiments, i.e., two agents in HyperGrids, CFN works best.\\nHowever, the performance of the CJFN algorithm is almost very close to that of the CFN algorithm.\\nMoreover, as the number of agents and grid dimensions increase, CFN becomes difficult to find enough patterns. \\nThe JFN series of algorithms become more effective. This is mainly because they adopt the divide-and-conquer idea. Each agent only needs to calculate the probability in its own action space, rather than searching in an exponential space.\\n\\n*(2)* There are two main reasons for the large L1-error. The first is the calculation sampling. In the multi-agent setting, there are a large number of grids that need to be used to calculate the L1-error. For example, the two-agent scene has 4096 grids, but only 16 samples are sampled per round. When calculating the index, we sampled 20 rounds, so the sampling value is much smaller than the number of grids, which will lead to a large L1-error. \\nWhen the number of sampling rounds is increased, the L1-error will be further reduced. When the number of rounds increases to 2000, the normalized L1-error indicator decreases to less than 1.\\nBut this will increase the additional calculation overhead.\\nThe second reason is the magnitude of the value. Different from the standard empirical L1 error, we used normalized L1-error, i.e., $\\\\mathbb{E}[|p(x)-\\\\pi(x)|] \\\\times N$, where $p(x)$ is the density of the target $x$, and $N$ is the number of target.\\nAs the number of final targets with rewards increases, the density of each target will become relatively smaller. 
In order to visualize the data, an additional scale of the number of grids is multiplied when calculating the L1-error. The actual L1-error is on the order of $10^{-4}$.\\n\\n*(3)* For the 3m scenario, we use it as an example to illustrate the ability of using the generative flow model as a decision model in large-scale decision-making.\"}", "{\"comment\": \"Thank you very much for the reviewer's response, the response you saw earlier was an incomplete version, we have added the complete version. To be specific, we rewrote the entire notation, then explained the need for such a definition, and gave examples to explain the specific meaning of the different motations. We hope you will re-evaluate these responses and thank you again for your time.\"}", "{\"comment\": \"To begin with, we thank you for the time you took to write this detailed review.\\n\\n**Response to Question 1:**\\nFirst of all, we generalize the measure GFN framework to the multi-agent framework. In order to reflect the differences of measure GFN framework, we redescribe different algorithms in Li et.al 2023 to serve as the basis for subsequent theoretical analysis. Moreover, we provide the theoretical setting of Global-local principle which justifies the key contribution of the aformentionned work: joint flow based training. Our theory (a) justifies the algorithm with a local-global principle (b) describe shortcomings of this algorithm (c) provide an extension solving these shortcomings via conditionning. \\n\\n**Response to Question 2:** \\nRegarding cycles, we leverage non-acyclic losses defined by Brunswic et al [1]. This prior work provides theoretical account of the acyclic limitation of GFN and how to bypass it via so-called stable losses.\\n\\n**Response to Question 3:** \\nThere are two main reasons for the large L1-error. The first is the calculation sampling. In the multi-agent setting, there are a large number of grids that need to be used to calculate the L1-error. 
For example, the two-agent scene has 4096 grids, but only 16 samples are sampled per round. When calculating the index, we sampled 20 rounds, so the sampling value is much smaller than the number of grids, which will lead to a large L1-error. \\nWhen the number of sampling rounds is increased, the L1-error will be further reduced. When the number of rounds increases to 2000, the normalized L1-error indicator decreases to less than 1.\\nBut this will increase the additional calculation overhead.\\nThe second reason is the magnitude of the value. Different from the standard empirical L1 error, we used normalized L1-error, i.e., $\\\\mathbb{E}[|p(x)-\\\\pi(x)|] \\\\times N$, where $p(x)$ is the density of the target $x$, and $N$ is the number of target.\\nAs the number of final targets with rewards increases, the density of each target will become relatively smaller. In order to visualize the data, an additional scale of the number of grids is multiplied when calculating the L1-error. Moreover, the L1-error is on the order of $10^{-4}$.\\nThe corresponding comparison has been made on hypergrid. Since the multi-agent problem can also be regarded as a single agent, as the dimension and the number of agents increase, the original GFlowNets often have difficulty solving the above problems.\\n\\n**Response to Question 4:** We use 3m scenario to verify the performance of the proposed algorithm. Although this task is the simplest, it is already more complex than the existing research work. In addition, this task is a typical winning-oriented task, which is slightly different from the goal of GFlowNets, but it can still illustrate the fitting ability of the proposed algorithm to the reward distribution.\\n\\n[1] Leo Maxime Brunswic, Yinchuan Li, Yushun Xu, Shangling Jui, Lizhuang Ma. A Theory of Non-Acyclic Generative Flow Networks. 
AAAI 2024\", \"title\": \"Response to Reviewer eWbR\"}", "{\"title\": \"Response to Reviewer ZXFS (part 2)\", \"comment\": \"**Response to Comments 1,2,3:**\\nComments1. 2. and 3. are actually related to the same misunderstanding. The action space is a fiber bundler over the state space it the space of couples (position,action). Why is that? The actions available to an agent may depend on the state it is in (say the agent is on the edge of a grid, the move beyond the grid limit is not possible). Therefore, to each state $s$ correspond available actions $a$ and $S^{-1}(a)$ the set of such actions. $S$ is simply the projection from $(s,a)$ to $s$.\\nThe formalism introduced aims at being general but in practice (and in the whole work), we assume that observations contain the whole information, we may thus identify $\\\\mathcal S$ to $\\\\prod_{i\\\\in I} \\\\mathcal O^{(i)}$. \\nThe transition map $T$ takes an element of the Action fiber bundle, ie a couple $(s,a)$. It thus depends on both state and action. \\nFinally, with $Id:\\\\prod_{i\\\\in I}\\\\mathcal O^{(i)}\\\\rightarrow\\\\prod_{i\\\\in I}\\\\mathcal O^{(i)}$ the identity map, the equation $\\\\prod_{i\\\\in I} p^{(i)} \\\\circ S \\\\circ \\\\pi = Id$ means that starting from observation $(o^{(i)})_{i\\\\in I}$ one may apply the combined policy $\\\\pi$ to get an action (more precisely a couple state-action), then forget the action to get a state (via the state map $S$) and then recover the observations via the observation projections. This composition should yield the same observation as those we began with. Despite being obvious in practice, it is a necessary mathematical assumption. \\n\\n**Response to Comments 4:** \\nIndeed, our target consists in sampling states proportionally to the reward the same way a usual GFN would and the same way the centralized MA-GFN does. We added a Problem Formulation section to clarify this in the core of our paper. 
We clarify how Theorem 2 combined with Theorem 1 answers our problem formulation.\\n\\n**Response to Comments 5,6:** \\nIndeed, the local rewards are untractable, that's actually a key difficulty of localizing GFNs. They are only used abstractly and in the independent MA-GFN algorithm. And yes, even though GFN could \\\"in principle\\\" work with stochastic reward (say by targeting the expectancy of the reward instead of the random value), and even though MSE-based FM-loss are minimized on this target, to my knowledge attempts were not successful. The point of our work is to go beyond that by training the collective of MA-GFN on the deterministic reward by enforcing a FM-property of an abstract global GFN.\\n\\n**Response to the Second Concern:**\\n We explain this situation from two aspects. First, the main goal of the MA-GFlowNets method is not to achieve higher reward benefits, but to discuss how to retain the characteristics of GFlowNets in a multi-agent setting, that is, the degree of fit between the sample distribution and the reward distribution. Our verification also illustrates this point. Whether it is Hyper-Grid or 3m scenes, it can sample the area of suboptimal rewards.\\nSecondly, the tasks under starcraft are usually win rate-oriented, which is somewhat different from the goal of MA-GFlowNets. Our experiments show that MA-GFlowNets has the potential to solve large-scale decision-making problems while ensuring diversity.\"}", "{\"comment\": \"I thank the authors for their response. However, my concerns regarding the clarity of the text and the presentation of experimental results still remain, thus I decided to keep my score.\"}", "{\"comment\": \"To begin with, we thank you for the time you took to write this detailed review.\\n\\n**General Response:**\\nWe are sorry for the misunderstanding caused by the motations in this paper. 
In order to solve this problem, we first modified the multi-agent setting and GFlowNets from the perspective of measure theory, and explained the misunderstood notations in detail. Then we add a section in the appendix, called **An Introduction for Notations**, which illustrates the necessity of this definition, in the sense that it improves the generality of notations. The structure of Hyper-grid is given to explain the definition of symbols such as policy and flow function.\\n\\n*Notation Motivation:*\\nTo begin with, our motivation to formalize the action space as a measurable bundle $\\\\mathcal{A} := \\\\{(s,a) | s \\\\in \\\\mathcal{S}, a\\\\in \\\\mathcal{A}_s\\\\}\\\\xrightarrow{S} \\\\mathcal{S}$ is three fold:\\n\\n1) The available actions from a state may depend on the state itself: on a grid, the actions available while on the boundary of the grid are certainly more limited than while in the middle. More generally, on a graph, actions are typically formalized by edges $s\\\\xrightarrow{a} s'$ of the graph, the data of an edge contains both the origin $s$ and the destination $s'$. In other words, on graphs, actions are bundled with an origin state. It is thus natural to consider the actions as bundled with the origin state. When an agent is transiting from a state to another via an action, the state map tells where it comes from while the transition map tells where it is going.\\n\\n2) We want our formalism to cover as many cases as possible in a unified way: Graphs, vector spaces with linear group actions or mixture of discrete and continuous state spaces. Measures and measurable spaces provide a nice formalism to capture the quantity of reward on a given set or a probability distribution. \\n\\n \\n3) We want a well-founded and possibly standardized mathematical formalism. In particular, the policy takes as input a state and returns a distribution of actions. the actions should correspond to the input state. 
To avoid having a cumbersome notion of policy as a family of distributions $\\\\pi_s$ each on $\\\\mathcal{A} _ s$, we prefer to consider the union of the state-dependent action spaces $\\\\mathcal{A}:= \\\\bigcup_{s\\\\in \\\\mathcal{S}} \\\\mathcal{A}_s$ and define the policy as Markov kernel $\\\\mathcal{S}\\\\rightarrow \\\\mathcal{A}$. However, we still need to satisfy the constraint that the distribution $\\\\pi(s)$ is supported by $\\\\mathcal A_s$. Bundles are usual mathemcatical objects formalizing such situations and constraints and are thus well suited for this purpose and the constraint is easily expressed as $S\\\\circ \\\\pi(s) = s, \\\\forall s\\\\in\\\\mathcal{S}$. \\n\\\\end{enumerate}\", \"our_synthetic_formalism_comes_with_a_few_drawbacks_due_to_the_level_of_abstraction\": \"1) The notation $\\\\pi(s)$ differs from the more common notation $\\\\pi(s, a)$ as the action already contains $s$ implicitly. \\n\\n2) We need to use Radon-Nikodym derivative. At a given state, on a graph, a GFlowNets has a probability to stop $$\\\\mathbb P(STOP | s) = \\\\frac{R(s)}{F _ {out}(s)}.$$ \\n On a continuous statespace with reference measure $\\\\lambda$, the stop probability is \\n $$\\\\mathbb P(STOP | s) = \\\\frac{r(s)}{f_{\\\\mathrm{out}}(s)}$$\\n where $r(s)$ is the density of reward at $s$ and $f_{\\\\mathrm{out}}(s)$ is the density of outflow at $s$. A natural measure-theoretic way of writing these equations as one is via Radon-Nikodym derivation: given two measures $\\\\mu,\\\\nu$; if $\\\\mu(X)=0 \\\\Rightarrow \\\\nu(X)=0$ for any measurable $X\\\\subset \\\\mathcal{S}$ then $\\\\mu$ is said to dominate $\\\\nu$ and, by Radon-Nikodym Theorem, there exists a measurable function $\\\\varphi \\\\in L^1(\\\\mu)$ such that $\\\\nu(X)=\\\\int_{x\\\\in X}\\\\varphi(x) d\\\\nu(x)$ for all measurable $X\\\\subset \\\\mathcal{S}$. This $\\\\varphi$ is the Radon-Nikodym derivative $\\\\frac{d\\\\nu}{d\\\\mu}$. 
\\n If one has a measure $\\\\lambda$ dominating both $R$ and $F _ {out}$ and if $F _ {out}$ dominated $R$ then \\n $$\\\\mathbb P(STOP | s) := \\\\frac{dR}{dF _ {out}}(s) = \\\\frac{dR}{d\\\\lambda}(s) \\\\times \\\\left( \\\\frac{dF _ {out}}{d\\\\lambda}\\\\right)^{-1}.$$\\n When $\\\\mathcal{S}$ is discrete, we choose $\\\\lambda$ as the counting measure and we recover the graph formula above. When $\\\\mathcal{S}$ is continuous, we choose $\\\\lambda$ as the Lebesgue measure and we recover the second formula.\", \"title\": \"Response to Reviewer ZXFS\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces a theoretical framework for Multi-Agent Generative Flow Networks (MA-GFlowNets), extending generative flow networks to collaborative multi-agent settings. It proposes four algorithms: centralized, independent, joint, and conditional joint flow networks, aiming to balance centralized training with decentralized execution. While the approach is innovative and demonstrates promising experimental results, the paper lacks sufficient theoretical grounding and fails to clearly differentiate itself from existing work. Additionally, the experimental validation is limited, as it does not adequately explore generalizability across diverse tasks. These weaknesses, particularly the unclear novelty and limited experimental scope, lead to the recommendation for rejection.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers highlighted concerns regarding the theoretical grounding of the Multi-Agent Generative Flow Networks (MA-GFlowNets), particularly the lack of rigorous justification for the proposed algorithms and their applicability to broader settings. They also noted limited experimental validation, as the benchmarks used were not diverse enough to demonstrate the generalizability of the approach. 
The authors responded with clarifications on their framework and provided additional experimental details but did not introduce new evidence or theoretical insights to sufficiently address these issues. These persistent gaps in theoretical rigor and experimental comprehensiveness were key factors in the final recommendation for rejection.\"}", "{\"comment\": \"**Response to Comment 1:**\\nThank you very much for your advice. One of our key contributions are two fold: *(a)* the generalization of the Measure GFN framework under multi-agent and *(b)* a theoretical account of the Joint Flow Loss introduced in Li et al [3]. \\n\\nIn particular, our theoretical setting is not limited to DAG or even to continuous setting with absorbing policy such as in Lahlou et al. [2], cycles are allowed via the use of stable losses. The formulation in this manuscript can be regarded as the extension of Bunswic et al. [3], providing a unified description of the algorithms introduced in \\\\cite{luo2024multi} in a more general setting. This allows us to provide a deeper description of the joint flow algorithm, its shortcoming and ways to solve them via conditional JFN. \\n\\nRegarding notation choices, our definition of GFlowNets is equivalent to the original formulation as well as that of [2] and [3]. We do not provide all the details of the equivalency but the appendix of [3] provides an equivalency between edgeflow formulation and measure-policy formulation. We are merely decomposition further to better distinguish the parametrizable part of the GFlowNet FlowInit-starpolicy- from say the reward. We modified the paper to smoothen this transition and provide a justification for our choice: in the multi-agent setting, local rewards and local edgeflow of a local agent depend on other agents. The theoretical frameworks becomes burdened with too many implicits and hidden relations between local GFlowNets. 
Our formulation allows us to explicitly separate what the local agent actually controls from what depends on global, possibly intractable, information. Moreover, in some settings the reward may not be accessible during inference. The stopping condition based on the reward is then replaced by the virtual reward, i.e., $\\\\hat R:=\\\\mathrm{ReLU}(F_{\\\\text{in}}-F_{\\\\text{out}})$. Restricting to the star-policy ensures the GFlowNets using the true reward or the virtual reward are more easily comparable. \\n\\n**Response to Comment 2:**\", \"gfn_in_the_multiagent_setting_may_be_realized_easily_in_two_contexts\": \"*(a)* If the reward is local, then the independent agent has its own independent policy given by a GFN. *(b)* If the reward is global with small communication costs (small observation encoding) and tractable global transitions.\\n\\nThe centralized algorithm is the formalization of *(b)*, while the independent one is the formalization of *(a)* in our framework. We argue in the paper that the condition for a reasonable centralized algorithm is restrictive and that, in general, the reward is global: the Starcraft 3m task is an example where each marine has its own policy, but the reward depends on the state of all three marines at the end of the sequence. The goal of the JFN is to train local agents with independent GFN policies to fit a global reward.\\n\\n**Response to Comment 3:**\\nThe key property of the JFN is the decomposition of the action flow of an abstract global GFN as a product of local action flows. Such a property does allow detailed balance or trajectory balance objectives. The DB and FM losses are very closely related and mostly differ by the implementation choice of the backward policy (FM implements the backward policy by finding parents and computing the forward edge flow for each transition to the current state, while DB implements an extra model, the backward policy, either fixed or trainable). 
Unfortunately, Brunswic et al. [3] do not provide a stable TB loss suitable for non-acyclic cases such as the Starcraft 3m task.\\n\\n[1] S. Luo, Y. Li, S. Liu, X. Zhang, Y. Shao, and C. Wu, \\u201cMulti-agent continuous control with generative flow networks,\\u201d Neural Networks, vol. 174, p. 106243, 2024.\\n[2] S. Lahlou, T. Deleu, P. Lemos, D. Zhang, A. Volokhova, A. Hern\\u00e1ndez-Garc\\u00eda, L. N. Ezzine, Y. Bengio, and N. Malkin, \\u201cA theory of continuous generative flow networks,\\u201d in International Conference on Machine Learning. PMLR, 2023, pp. 18269\\u201318300.\\n[3] L. Brunswic, Y. Li, Y. Xu, Y. Feng, S. Jui, and L. Ma, \\u201cA theory of non-acyclic generative flow networks,\\u201d in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 10, 2024, pp. 11124\\u201311131.", \"title\": \"Response to Reviewer CN8s\"}", "{\"summary\": \"The paper presents a theoretical framework focused on adapting GFlowNets to the multi-agent setting, building on the previously proposed theory of non-acyclic GFlowNets. Several training algorithms are proposed that can work in centralized and decentralized settings, and experimental evaluation is provided on both synthetic and real-world tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"This work is one of the first to consider a novel setting of multi-agent GFlowNets, providing an extensive theoretical framework and results.\\n\\nIt is known that RL algorithms can be applied to GFlowNet training [1], and this work is among the few [2] that explore the other direction \\u2014 applying GFlowNets in RL tasks.\", \"references\": \"[3] Shuang Luo, Yinchuan Li, Shunyu Liu, Xu Zhang, Yunfeng Shao, Chao Wu. Multi-Agent Continuous Control with Generative Flow Networks. Neural Networks, Volume 174, 2024\", \"weaknesses\": \"I have two major concerns about this work.\\n\\nMy first major concern is the clarity of the text. 
For the most part, I did not understand the methodological and theoretical results of this paper. The main thing hindering readability is a combination of very heavy mathematical notation with a lack of consistency, clarity, and correct ordering of definitions. Here are some examples:\\n\\n1) I did not understand what the state map $S$ (line 80) is and what its purpose is. It is introduced in Section 2 but never used in the main text of the paper.\\n\\n2) Why does the transition kernel (line 80) only depend on the action, not on state-action pairs? In standard RL and multi-agent RL formulations it depends on both.\\n\\n3) Can you please explain the equation $\\\\prod_{i \\\\in I} p^{(i)} \\\\circ S \\\\circ \\\\pi=\\\\mathrm{Id}$ (line 94)?\\n\\n4) The task that multi-agent GFlowNets try to solve is never formally defined. After reading Section 2 one can guess that it is sampling global states with probabilities proportional to the global reward, but the task has to be explicitly stated nevertheless.\\n\\n5) Local rewards $R^{(i)}$ appear in Section 2 (line 148), but their definition and connection to the global reward are given only in the next section.\\n\\n6) Their definition given in Section 3 is $R^{(i)}\\\\left(o_t^{(i)}\\\\right):=\\\\mathbb{E}\\\\left(R\\\\left(s_t\\\\right) \\\\mid o_t^{(i)}\\\\right)$ (line 189), and they're said to be utilized in the local training loss. From what I understand, this expectation is intractable in the general case, so I do not understand how they are defined in practice. 
The authors mention that it is possible to use stochastic rewards instead, but as far as I am aware, the GFlowNet theory introduced in previous works, upon which this paper builds, does not support stochastic rewards.\\n\\n7) On line 241, the authors mention: \\\"At this stage, the relations between the global/joint/local flow-matching constraints are unclear, and furthermore, the induced policy of the local GFlowNets still depends on the yet undefined local rewards.\\\" In my humble opinion, if any novel definition/theorem/algorithm depends on some object, the object has to be previously introduced and properly defined in all cases.\\n\\nI believe that this paper could greatly benefit from using simplified notation in the main text (while the full set of definitions can be introduced and used in the appendix), as well as a major revision of Sections 2 and 3 to ensure that the problem itself and all objects we work with are properly defined and explained to the reader in proper order. \\n\\nMy second concern is related to the presentation of experimental results. The abstract states: \\\"Experimental results demonstrate the superiority of the proposed framework compared to reinforcement learning and MCMC-based methods.\\\" The conclusion also has a similar statement (line 470). While on the toy synthetic hypergrid environment the proposed methods do show significant improvement over baselines, the results on the more interesting StarCraft task do not support this claim. The proposed JFN algorithm falls behind 3 out of 4 baselines and performs similarly to the remaining one (which is Independent Q-Learning, one of the simplest existing algorithms in multi-agent RL). I understand that the main contributions of this paper are theoretical and methodological, but nevertheless I suggest correcting the statements to faithfully reflect the presented results. 
I also understand that a metric such as win rate may not favor GFlowNets compared to RL approaches, but then I would also suggest presenting some other quantitative metric to demonstrate the utility of the proposed approach in this task, e.g. some measure of diversity.\", \"questions\": \"0) See Weaknesses.\\n\\n1) Can you please give more detail on how the proposed framework and algorithms differ from the ones presented in [3]?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a theoretical framework for multi-agent generative flow networks (MA-GFlowNets) and presents four algorithms: Centralized Flow Network (CFN), Independent Flow Network (IFN), Joint Flow Network (JFN), and Conditioned Joint Flow Network (CJFN). The authors introduce a local-global principle based on the principles in MARL that allows training individual GFNs as a unified global GFN. The authors evaluate their approach on Grid and SMAC tasks by comparing with MARL and MCMC approaches.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper is easy to understand (although some important details are missing as discussed in the next part), and the paper studies an important problem in extending GFlowNets to handle multi-agent tasks.\", \"weaknesses\": \"1. The paper's novelty claim is questionable. The authors state \\\"However, based on current theoretical results, GFlowNets cannot support multi-agent systems\\\" while ignoring relevant prior work, particularly Li et al. (2023), which already explored multi-agent GFlowNets.\\n2. The experimental evaluation has several limitations:\\n- Only uses the simplest 3m StarCraft II environment, and there is little performance improvement.\\n- Results in Figure 3 show very high L1 errors, which suggests poor learning. 
Doesn't demonstrate clear advantages over single-agent GFlowNet approaches.\\n- Little performance improvements over baselines.\\n3. The paper doesn't adequately address the cyclic environment problem. GFlowNets traditionally work best in acyclic environments, but the paper doesn't explain how they handle cycles in StarCraft II scenarios.\\n4. The motivation for using MA-GFN in the chosen tasks is not well justified. Many of the presented problems could potentially be solved more effectively with single-agent GFlowNet approaches.\", \"questions\": \"1. How does the proposed approach differ from Li et al. (2023)'s work on multi-agent GFlowNets? Please clarify the novel contributions relative to this prior work.\\n2. How does the proposed method handle cyclic state transitions in StarCraft II environments, given that GFlowNets traditionally assume acyclic state spaces?\\n3. The L1 errors shown in Figure 3 are quite high. Could the authors explain why this occurs and how it affects the practical utility of the method? What specific advantages does the MA-GFN approach offer over single-agent GFN solutions for the presented Grid tasks? Could the authors provide experimental comparisons?\\n4. Why is it evaluated only on the simplest 3m StarCraft II scenario? Have the authors tested their approach on more complex multi-agent scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4o4fDJL6I7
Evaluating Ranking Loss Functions in Performance Predictor for NAS
[ "Han Ji", "Yuqi Feng", "Jiahao Fan", "Yanan Sun" ]
Performance evaluation is a critical but compute-intensive procedure in neural architecture search (NAS). To alleviate evaluation costs, performance predictors have been widely adopted to predict architecture performance directly. Recent studies have introduced ranking loss functions into predictors to focus on the architecture rankings instead of absolute accuracy, thus enhancing the ranking ability of performance predictors. Despite the successful application of ranking loss functions, the lack of comprehensive measure metrics and different experimental configurations make a fair comparison among these loss functions a huge challenge. Additionally, some well-known ranking loss functions have not been thoroughly examined in the context of performance predictors. In this paper, we conduct the first study for 11 ranking loss functions containing the existing and the novel ones by comparing their effectiveness in performance predictors under various settings. We find that: (i) The choice of ranking loss function has a major influence on the performance of predictors; (ii) the quality of the architectures searched by the predictor-based NAS methods is closely correlated with the predictor's performance on top-centered rank metrics, rather than traditional metrics like Kendall Tau. We believe these results and insights can serve as recommendations for the optimal loss function to employ in predictors across various search spaces and experimental conditions.
[ "Neural Architecture Search", "Performance Predictor", "Loss Function" ]
https://openreview.net/pdf?id=4o4fDJL6I7
https://openreview.net/forum?id=4o4fDJL6I7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "r1Gy7js8PI", "Z2OFr2JKFU", "PeaH1uK9VR", "4Is0gaH8G6", "0B21dDk5Dn" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1729622777757, 1730393411024, 1730115236199, 1730488317515, 1731468781029 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8767/Reviewer_mcru" ], [ "ICLR.cc/2025/Conference/Submission8767/Reviewer_ohRH" ], [ "ICLR.cc/2025/Conference/Submission8767/Reviewer_N7Nn" ], [ "ICLR.cc/2025/Conference/Submission8767/Reviewer_YugU" ], [ "ICLR.cc/2025/Conference/Submission8767/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper aims to provide a comprehensive benchmark and detailed analysis of ranking loss functions for training performance predictors in neural architecture search (NAS). Specifically, the authors compare 11 ranking loss functions (including pointwise, pairwise, listwise, and weighted ranking loss) across 5 NAS search spaces and 13 corresponding NAS tasks. Notably, the authors employ various evaluation metrics, including global, rank-weighted, and Top-$K$ metrics, emphasizing the importance of using top-centered metrics for NAS tasks. 
Additionally, the authors evaluate the practical performance of performance predictors trained with each loss function on two NAS frameworks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper systematically studies the application of ranking loss functions in NAS, which is an important and noteworthy issue.\", \"The paper conducts fair performance comparisons among 11 ranking loss functions, including pointwise, pairwise, listwise, and weighted ranking loss, covering most types of ranking loss functions.\", \"The paper employs various evaluation metrics, including the traditional global metric Kendall Tau, as well as ranking-based metrics like Weighted Kendall Tau, Top-$K$ metrics N@$K$, and Rel@$K$, providing a comprehensive assessment of the loss functions' performance.\", \"The paper conducts extensive experiments to benchmark the effectiveness of ranking loss functions on NAS tasks, accompanied by detailed analysis.\", \"The structure of the paper is clear and straightforward.\"], \"weaknesses\": [\"However, before this paper can be accepted, I still have the following **major concerns**:\", \"**Presentation:** The definitions of the loss functions and evaluation metrics are very vague, which is detrimental to reproducibility. While some intuitive explanations are provided in Section 3, the lack of formal mathematical definitions in Appendix A is quite confusing.\", \"For example, in the definition of Weighted Approximate-Rank Pairwise (WARP) in Appendix A, the authors state, \\\"If the incorrect pair appears soon, we think the predictor is poor at ranking and will assign a large weight to this sample when calculating the loss\\\". How exactly is this weight calculated? I couldn't find the details anywhere in this paper.\", \"Another example is the even more ambiguous definition of metrics in Section 3.2. 
For instance, I can't understand the statement in Weighted Kendall Tau about \\\"There is a hyperbolic drop-off in architecture importance according to the descending order of accuracy\\\", or \\\"Rel@K computes the ratio of the accuracy of architecture $A_K$ to that of the best one $A_{max}$\\\". The authors should not shy away from mathematical symbols by replacing them with confusing textual descriptions --- at the very least, precise definitions should be available in the appendix.\", \"**Experimental settings:** I still have the following concerns:\", \"For a fair comparison, the authors use the same performance predictor setting, including the same learning rate `lr` and weight decay `wd` (Appendix B.2). However, this is inherently unfair for comparing loss functions, as different `lr` and `wd` lead to different losses. In fact, different losses are sensitive to `lr` and `wd`. For example, in information retrieval practices, pairwise loss typically requires a lower `lr` and `wd`, while listwise loss needs a higher `lr`. The authors should compare the loss functions across a wider range of hyperparameters and provide a sensitivity analysis to ensure a fair and comprehensive comparison.\", \"The authors test only on one performance predictor composed of a four-layer GCN encoder and a three-layer MLP, which is somewhat limited. I recommend that the authors conduct experiments on more types of performance predictors to verify the consistent performance of the loss functions across different networks.\", \"**Metrics:** The authors introduce various metrics to evaluate performance, emphasizing that Top-$K$ metrics are more effective for practical NAS tasks. However, there are additional Top-$K$ ranking metrics in recommender systems which need to be considered:\", \"NDCG and NDCG@$K$ are the most commonly used metrics in information retrieval and recommendation systems. 
Many ranking loss functions are designed based on them, which are fundamentally different from the accuracy-based metrics listed in the paper. In fact, with slight modifications, NDCG can be adapted for evaluation in NAS. Specifically, by sorting architecture-performance pairs $(x_i, y_i)$ according to the predicted performance $\\\\hat{y} _i$, DCG can be defined as $\\\\mathrm{DCG} = \\\\sum _{i = 1}^{N} (2^{y _i} - 1) / \\\\log _2(i + 1)$ . I suggest the authors consider more recommendation metrics to evaluate ranking loss functions.\", \"**Experiment Analysis:** The experimental analysis is generally thorough, but I have the following additional questions:\", \"In Section 4.1.1, the authors compare the effects of different loss functions on various NAS tasks and \\\"observe that no ranking loss functions consistently surpass other competitors across all 13 tasks. For instance, ListNet achieves the top-1 $\\\\tau$ in NAS-Bench-101 while having the lowest $\\\\tau$ in the TransNAS-Bench101-Micro Autoencoder task\\\". Why does this occur? Is it related to the dataset or task? A more insightful discussion is preferred.\", \"I suggest the authors summarize the criteria for choosing ranking loss functions after the experiments. Specifically, which type of loss function should be selected for a particular dataset size, NAS task, and training portion?\", \"Additionally, I have a few **minor concerns** that do not impact the score:\", \"All instances of @K should be written as @$K$ for consistency.\", \"Figure 1 should highlight the best results, perhaps using superscripts.\", \"The legend in Figure 2 obstructs the x-axis label \\\"Training Portion\\\".\", \"The caption for Figure 2 uses \\\"under various settings\\\", which is confusing. 
It could be changed to \\\"under different training and test portions\\\".\"], \"questions\": \"See the \\\"Weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Disclaimer: I have never worked on NAS, not sure why this paper was assigned to me. Providing a high-level, low confidence review.\", \"overview\": \"This paper studies ranking losses for training predictors used in neural architecture search (NAS). Specifically, a search algorithm uses a predictor to evaluate candidate architectures since proper evaluation is often very expensive. Several ranking losses are compared, including pointwise, pairwise, and listwise losses. The paper argues that using weighted losses, which place more weight on top-ranking architectures, as opposed to simply ranking them overall, yields better performance than other losses.\\n\\nA thorough comparison of several ranking losses for NAS may be an interesting contribution, but there could be some concerns regarding novelty.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The comparison of multiple ranking losses seems comprehensive, covering 11 losses.\", \"Some of the proposed weighted losses show promising NAS results.\", \"The paper is clearly written.\"], \"weaknesses\": [\"I am not an expert on NAS but adding a weighted ranking loss to previously proposed ranking losses may be somewhat Incremental (weighted vs. non-weighted), especially since improvement in performance compared to baselines seems rather small.\", \"The results are sometimes hard to interpret. For example, looking at Figure 1, it is hard to say if there is a loss which performs well across multiple tasks. Perhaps try a bar plot or a line plot? 
As another example, figures 2 and 4 show the winning loss for a combination of train portion, test portion, and task, and it is hard to identify clear trends in the multitude of results.\"], \"questions\": \"Suggestion: move the Loss Function in Table 4 to the second column from the left. Perhaps also move \\u201cSearch Method\\u201d to the third column.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper evaluates different ranking loss functions in performance predictors for Neural Architecture Search (NAS) and also draws some insights.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to read.\\n2. The experiments are comprehensive.\", \"weaknesses\": \"The paper conducts comparative experiments and analyzes existing loss functions for assessing the performance of Neural Architecture Search (NAS). However, the overall innovation and contribution of the paper are limited, with no new contributions in terms of evaluation methods, conclusions drawn, or new methods derived from the evaluation results. The two insights obtained through experiments also lack persuasiveness. The first insight, the importance of ranking loss in performance predictors, is widely recognized. It is precisely because people recognize the importance of ranking loss for NAS that there has been continuous iteration and the proposal of various ranking losses. The second insight, that ranking loss with excellent performance on top-centered rank metrics can help find high-quality architectures, is also quite straightforward. Does this insight imply that top-centered rank metrics should be used in the design of NAS methods? If the conclusion relies solely on experimental evaluation, can it stand up? 
Is there any theoretical support?\\n\\nI suggest that having a clear or more in-depth conclusion regarding the loss function would be more persuasive, such as what kind of model or predictor is suitable for what kind of ranking loss, or analyzing the mathematical principles of different loss functions to further propose what principles we should follow when designing ranking loss functions.\\n\\nOverall, I believe this paper does not make any special contributions in terms of experimental setup, conclusions drawn, and method design, and I think it does not meet the standards of ICLR.\", \"questions\": \"1. How can we utilize the conclusion \\\"ranking loss with excellent performance on top-centered rank metrics can help find architectures with high quality\\\" to guide the future design of NAS methods or the design of loss functions?\\n2. Can you explain the obtained insights from the mathematical essence of different loss functions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the effectiveness of various ranking loss functions in performance predictors within Neural Architecture Search (NAS). In specific, this paper compares 11 ranking loss functions, including pointwise, pairwise, listwise, and weighted categories, across multiple search spaces and metrics to identify the most effective for NAS. The study finds that ranking loss choice significantly impacts predictor performance, particularly in discovering high-quality architectures. The paper finds that the top-centered metrics are better suited for NAS tasks than traditional metrics, emphasizing the predictor's ability to identify high-performing architectures. 
These findings help guide the selection of ranking losses, improving the efficiency and accuracy of predictor-based NAS methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. Comprehensive work with extensive experiments.\", \"weaknesses\": \"Overall, I personally find it hard to justify the potential impact of the work. This is not the first work studying ranking losses in efficient neural architecture search or autoML in general. Ranking losses haven\\u2019t been widely used in NAS or autoML, likely because there is still a lack of significant and consistent gains from ranking losses in practice.\\n\\nIn addition, I found the following points making the paper hard to read and understand by a general audience.\\n1. Mathematical definitions of both losses and metrics are missing, not even in the appendix. I had to refer to other papers. Without math definitions, details of the metrics are hard to understand. For example, N@K is lower-the-better, which is only mentioned in the caption of Figure 4 and likely to confuse many readers at the beginning.\\n2. The color coding of the results is confusing. For example, the color code in Figure 1 appears to highlight the very bad ones, like MSE on the TB101_MACRO-AUTO dataset. However, I believe what\\u2019s more relevant is the best loss on each dataset. By just scanning, it is hard to see any pattern showing ranking losses superior for NAS.\\n3. Figure captions are not very informative. Important explanations are missing. For example, by just looking at Figures 2 and 4 and their captions, it is almost impossible to understand why colors are shown on the Train-test portion grids, and thus hard to grasp what these colors are trying to convey.\", \"questions\": \"1. What is the scale of the N@k metric? 
My reading of the definition is that it is the \\u201ctrue rank\\u201d of the architecture; if so, it depends on the context: which architectures are ranked together with the one on the list, and what is the size of the context?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4nrcn0YoDG
Global Identifiability of Overcomplete Dictionary Learning via L1 and Volume Minimization
[ "Yuchen Sun", "Kejun Huang" ]
We propose a novel formulation for dictionary learning with an overcomplete dictionary, i.e., when the number of atoms is larger than the dimension of the dictionary. The proposed formulation consists of a weighted sum of $\ell_1$ norms of the rows of the sparse coefficient matrix plus the log of the matrix volume of the dictionary matrix. The main contribution of this work is to show that this novel formulation guarantees global identifiability of the overcomplete dictionary, under a mild condition that the sparse coefficient matrix satisfies a strong scattering condition in the hypercube. Furthermore, if every column of the coefficient matrix is sparse and the dictionary guarantees $\ell_1$ recovery, then the coefficient matrix is identifiable as well. This is a major breakthrough for not only dictionary learning but also general matrix factorization models as identifiability is guaranteed even when the latent dimension is higher than the ambient dimension. We also provide a probabilistic analysis and show that if the sparse coefficient matrix is generated from the widely adopted sparse-Gaussian model, then the $m\times k$ overcomplete dictionary is globally identifiable if the sample size is bigger than a constant times $(k^2/m)\log(k^2/m)$ with overwhelming probability. Finally, we propose an algorithm based on alternating minimization to solve the new proposed formulation.
[ "Dictionary learning", "overcomplete", "sparse", "identifiability" ]
Accept (Poster)
https://openreview.net/pdf?id=4nrcn0YoDG
https://openreview.net/forum?id=4nrcn0YoDG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xHlai0CUcl", "vULnlk7bVX", "ot9nd7NrEM", "mxrLrLi5GB", "gh1tqNvH6F", "ZNIkEZgiC3", "W4XaNQsFl8", "VPTztsyahx", "MhjxAFgLb8", "JkMGYye3YF", "JFoD3pkYdM", "HdERpJLbft", "FE7s79NXDO", "EMDf2TfRi7", "6RmAmktgGd" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732524435643, 1732469180877, 1730400779878, 1732386238152, 1732459864852, 1731440570267, 1732542508753, 1735397196520, 1733154365921, 1730598987632, 1730189357679, 1732379916229, 1737523822370, 1732334571327, 1732681066440 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_LGHw" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_BRUJ" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_LGHw" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_1DsL" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ], [ "ICLR.cc/2025/Conference/Submission7188/Area_Chair_22Tt" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_LGHw" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_z7Po" ], [ "ICLR.cc/2025/Conference/Submission7188/Reviewer_LGHw" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ], [ "ICLR.cc/2025/Conference/Submission7188/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Indeed, $\\\\Psi=I$ is one of the optima and Equation 4 holds only when $\\\\Psi=I$. Does this imply that the solution to equation 2 is unique?\"}", "{\"comment\": \"Thank you for your reply. 
What Lemma 1 tries to show is that as long as $(A_\\\\star,S_\\\\star)$ is *a* solution (possibly one of many), then (3) must hold. It is true that there may exist $\\\\Psi\\\\neq I$ such that $(A_\\\\star\\\\Psi,\\\\Psi^{-1}S_\\\\star)$ is also optimal, but it is *necessary* that $\\\\Psi=I$ is one of the optima. Lemma 1 is looking for a necessary condition for any optimal solution of (2).\"}", "{\"summary\": \"The paper proposes an approach for dictionary learning that uses a loss that mixes a modified, weighted version of the ell-1 norm of the mixture matrix coefficients (with different weights for different rows) with the volume of the dictionary matrix. It identifies a condition for successful identification of the mixing matrix called strong scattering. Similar to existing results, the paper analyzes the likelihood of strong scattering for random mixing coefficient matrices such as sparse Gaussian, finding a scaling law for the number of vectors used in learning to scale like $\\\\mathcal{O}\\\\left(\\\\frac{k^2}{m} \\\\log \\\\frac{k^2}{m}\\\\right)$, where $k$ is the number of dictionary elements and $m$ is the data dimension. An alternating minimization algorithm for the proposed optimization is included as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The formulation appears novel and the analytical results are comprehensive.\\nA sound identifiability condition is presented.\", \"weaknesses\": \"As with other conditions for sparse learning and recovery, it appears that the required strong scattering condition cannot be efficiently checked.\\n\\nIt is difficult to assess how much stronger the sufficient scattering condition is versus \\\"that of complete dictionary learning\\\".\\n\\nSome specific arguments are not clear (see questions).\\n\\nA figure in the experimental section (cf. Line 466) is missing.\", \"questions\": \"Line 165: is a square power missing outermost in the second term? 
Why does this line imply $\\\\alpha = 1$?\", \"line_171\": \"Why is Assumption 1 reasonable? Is this equality always possible? If so, can that be shown as a lemma?\", \"line_188\": \"if $\\\\mathcal{B}_m \\\\subseteq \\\\mathcal{S}$, then isn't $\\\\mathcal{B}_m \\\\cap \\\\mathcal{S} = \\\\mathcal{B}$?\", \"line_246\": \"Assumption 4 has not yet been introduced - can you move the definition earlier?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your positive assessment. Let us clarify the differences between some related works:\", \"Reference [3] is more similar to [Aharon et al., 2006b], [Hillar & Sommer, 2015], and [Cohen & Gillis, 2019], i.e., sparsity is imposed by directly minimizing the number of nonzeros of $S$. Their results are similar too: the Kruskal's rank (or spark) of the dictionary is big enough (which is a mild assumption), and the sparse coefficient matrix $S$ contains enough combinations of the sparsity patterns. The latter is a very strong assumption, which requires a sample size that is factorial in $k$ and $s$.\", \"The sufficiently scattered condition in NMF is quite different from this paper. Since NMF assumes the sources to be nonnegative, the sufficiently scattered condition in [1,2] is an assumption that is applied to a set in the *nonnegative orthant*. There is a highly related version that applies to a set in the probability simplex. Dictionary learning is not constrained to be nonnegative, so the results in [1,2] cannot be directly applied. In [Hu & Huang, 2023a], a sufficiently scattered condition is proposed for complete dictionary learning, but this one applies to a set in the *hypercube*, which is illustrated in Figure 1(left). 
For overcomplete dictionary learning, which is the focus of this paper, the sufficiently scattered condition in the hypercube in [Hu & Huang, 2023a] is not enough to guarantee identifiability. The strongly scattered condition proposed in this paper, as illustrated in Figure 1(right), is shown to guarantee identifiability.\", \"In line 187, it is correct to write $\\\\mathcal{S}\\\\in\\\\mathcal{C}_k$, because we are only interested in sets that are contained in the hypercube. We will fix the figure in the experimental section too. Thanks for your careful review.\"]}", "{\"comment\": \"Many thanks for the authors' response. However, only some of the concerns have been addressed. In addition,\\n\\n1. If there are multiple optimal solutions to equation (2), there may exist a $\\\\Phi\\\\neq I$ such that $(A_*\\\\Phi,\\\\Phi^{-1}S_*)$ is also an optimal solution. Therefore, I think the proof of Lemma 1 is not rigorous.\"}", "{\"summary\": \"The paper introduces a new formulation for the overcomplete dictionary learning problem. The authors show global identifiability of the dictionary and sources up to permutation and scaling provided that the atoms are sufficiently sparse.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper seems mathematically sound (be careful with the dimensions, see the detailed comments below). Its positioning with respect to the existing literature should be better documented though. Two results appear as particularly related: Hu and Huang 2023 and Agarwal et al/ Rambhatla et al. It would help to have a clear discussion on the improvement of the paper compared to those results.\", \"weaknesses\": \"Sparse coding or sparse dictionary learning are not new\", \"questions\": [\"Detailed Comments:\", \"Maybe recall what complete and overcomplete (no orthogonality) dictionary mean\", \"Formulation (2) should be better introduced. 
Why is\", \"line 106, you say that A should be a dictionary that guarantees exact recovery of all s-sparse vectors. Do you mean that min ||x||_1 s.t y= Ax should have a unique solution for all s-sparse vectors?\", \"line 107, what is the cellular hull?\", \"you should clarify the notion of scattered cellular hull before introducing your results.\", \"Statement of Lemma 1 is misleading. First of all, from what I understand the weights d_{*c} reach the maximum of \\\\sum_c d_c ||e_cS_*|| under the constraint \\\\|d\\\\|\\\\leq m. Secondly, if the max is attained for (3), why not just optimize the l1 norm squared?\", \"Is it always possible to scale the columns of A_{\\\\#} and rows of S_{\\\\#} to satisfy (5). This is not obvious to me\", \"If I understand well you want the set S to be reduced to canonical vectors p? and S could include vectors that are not in the span of Q but all vectors in span(Q) must be of the form q/||q||?\", \"From your definition of B_m, the set is a subset of R^k (i.e. it is given by some linear combinations of the columns of Q). Moreover S is also a subset of R^k so how can the intersection of those subsets be a subset of R^m (i.e given by rows of Q)? Maybe you mean the columns of Q?\", \"line 158-159, I would add just one sentence, to explain that for the correlation to be maximum, you need the cosine of the angle between the vectors to be maximum which implies d_c = \\\\alpha \\\\|e_c^T S_*\\\\| for all c\", \"lines 164-165, there are alphas missing.\", \"line 178 and Figure 1. If I understand well, the set B_m is an intersection of spheres of dimension m. If my understanding is correct, I think it would be worth mentioning it somewhere because it looks as if the points clouds in Fig 1 have non empty inerior (especially the 2-strongly scattered one) while my guess is there are empty.\", \"lines 241-244, in your proof sketch, again if I understand well, you define your matrix Q from the left factor of the SVD of A_#. I.e. 
if you have A_# = U\\\\Sigma V^T, then you define Q as V. Then why not say it like that. I feel this is simpler and much more clear\", \"On line 248, you refer to assumption 4 which does not appear anywhere (the hyperlink does not work)\", \"line 251-253, shouldn\\u2019t the pseudo inverse be applied on the right of S_*, i.e. from line 252, the dimensions of W seems to be n\\\\times n to me. Moreover, what you need to project to have the decomposition of line 251 are the rows of S_* not the columns.\", \"One lines 268-269, if I\\u2019m not wrong you mulyiply both sides by S_# and not S_*\", \"On line 272, there is a transpose missing on the second A_#\", \"On line 272, the last equality in Equation (8) is not completely clear to me. Isn\\u2019t ||e_c^T S_*||_1 = ||w_c^T S_#||_1 and not ||e_c^T S_#||_1 ? why is ||w_c^T S_#||_1 = ||e_c^T S_#||_1 ? Does the relation follow from (5) and the fact that A_# = A_*D\\\\Pi ? It would help to have even a short additional explanation here.\", \"lines 303-305, I don\\u2019t understand the sentence. You say that the sparsity is implicitely implied in (5)? How come ?\", \"lines 302 - 303 should be rephrased. I think what you mean is that \\u201csparsity is required to have the strongly scattered condition used in the statement of Theorem 1\\u201d instead of \\u201csparsity is implied in Assumption 1\\u201d\", \"line 308 \\u201cdoes not necessarily mean that the sparse coefficients S_# is identifiable\\u201d \\u2014> \\u201care identifiable\\u201d ?\", \"lines 313 -320, Assumption 4 seems quite strong (or quite vague) on the dictionary. Is it easy to find such dictionaries? (I.e. you don\\u2019t provide any numerical illustration). 
It would be perhaps good to have a short comment such as the one at the beginning of section 2.3\", \"lines 339-340, \\u201cthe most crucial condition is assumption 3 that cell ..\\u201d \\u2014> \\u201cthe most crucial condition is assumption 3, or the fact that cell(S_#) should be generated \\u2026\\u201d?\", \"lines 341-342: \\u201cand show that when it satisfied assumption 3\\u201d \\u2014> do you mean \\u201cand show that it satisfies assumption 3\\u201d?\", \"Section 2.3., lines 337-346, I don\\u2019t really understand why, if you can make it work in the sparse Gaussian model, you can\\u2019t make it work in the Bernoulli Gaussian model. If the probability in the Bernoulli distribution is set to s/n, can\\u2019t you get a result similar to what you have with sufficient probability? Even if you can\\u2019t be at least s-sparse, isn\\u2019t \\u201cat least s-sparse\\u201d with sufficient probability enough?\", \"line 348 - 349 \\u201cif for every column of S\\u201d \\u2014> \\u201cif every column of S\\u201d\", \"lines 362-363 : \\u201cis equal to\\u201d or \\u201cequals\\u201d but not \\u201cequals to\\u201d\", \"line 380, I would remove the line \\u201cwhich is a good sign that the bound is tight \\u201d\", \"line 383 \\u201ceven if identifiability of S_# is not required\\u201d, what do you mean \\u201cis not required\\u201d? Aren\\u2019t all your result focusing on the identifiability of S_# ? i would remove the paragraph starting from \\u201cOn the other hand\\u201d because it makes everything unclear.\", \"line 389 - 390, the sentence \\u201cDue to the novel formulation (2) for overcomplete \\u2026\\u201d does not make sense either. 
Do you mean \\u201cWe will now design an algorithm for formulation (2) for which uniqueness (up to permutation and scaling) of the dictionary and sources was shown above\\u201d\", \"line 427 \\u201cwhich is not preferable as one step of an iterative algorithm \\u201d just remove.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"No it doesn't.\\n\\nUniqueness is the main goal of this paper, and is formally proven in $\\\\S2.2$; more specifically, Theorem 1 shows that $A$ can be uniquely recovered under Assumptions 1, 2, & 3, and Corollary 1 shows that both $A$ and $S$ can be uniquely recovered under Assumptions 1-4. At Lemma 1, not one assumption has been brought up yet, so uniqueness cannot be achieved (or worse, assumed) at this moment.\"}", "{\"metareview\": \"The paper studies the identifiability of sparse dictionary models in both the complete and overcomplete case. It derives deterministic conditions for global identification (up to permutation and scale), and shows that under a random-sparse-Gaussian model, an m x k dictionary is identifiable with high probability when the number of observations exceeds k^2 / m. The proposed approach studies the global minimizer of a novel objective function which combines the volume log det (AA\\u2019) spanned by the rows of the dictionary matrix and the maximum L1 norm of the rows of the sparse coefficient matrix S, and argues that when the coefficients are \\u201csufficiently scattered\\u201d on the hypercube, the target factorization is unique.\\n\\nAs described above, the paper gives a novel sufficient condition for global identifiability of sparse dictionary models (the sufficiently scattered condition). When applied to random coefficient models, this result significantly improves over existing identifiability results for overcomplete dictionary learning. 
Interestingly, the analysis guarantees the correct recovery of the dictionary even for A which L1 minimization does not recover all s-sparse coefficient vectors. The main limitation of the work is that it only guarantees identifiability - not recovery by a tractable algorithm. With that said, identifiability of sparse models remains a fundamental problem in representation learning; the paper advances the understanding of this problem with a novel condition and better rates for overcomplete models.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers praised the paper\\u2019s mathematical soundness, noting that it gives a novel sufficient condition for global identifiability of sparse dictionary models, and noted connections between the sufficiently scattered condition for dictionary learning and corresponding conditions for nonnegative matrix factorization. The discussion clarified this connection, as well as the novelties of the paper wrt the literature on NMF.\\n\\nTwo main limitations were noted by reviewers. First, the sufficiently scattered condition may be challenging to check or verify in practice [BRUj,LGHw]. As the reviewers note, this condition can be verified mathematically for random coefficient models, but cannot be checked experimentally on real data. The second limitation of the work is that the paper does not provide computational guarantees: the theory verifies that the global optimum of the proposed objective function corresponds to the target dictionary. However, the proposed optimization formulation is nonconvex, and is not guaranteed to find a global optimum [BRUj].\\n\\nReviewers also noted that while this is mostly a theoretical paper, the experimental section could be improved (and is seemingly missing a figure).\"}", "{\"comment\": \"Thank you for answering my questions. 
I would keep my score, as the experimental results cannot convincingly validate the main claim.\"}", "{\"summary\": \"This paper addresses the identification problem in over-complete dictionary learning by introducing a new formulation. The authors primarily build on the analysis from [Huang & Hu, 2023], extending the concept of \\\"sufficiently scattered\\\" to the over-complete setting. By combining this extension with scaling and independence conditions for $A$ and $S$, the authors argue that \\\"sufficiently scattered\\\" serves as a sufficient condition for the identifiability of $A$ under the proposed formulation (2). Additionally, they provide a theoretical guarantee that this \\\"sufficiently scattered\\\" condition holds with high probability under the commonly used Bernoulli-Gaussian distribution.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea is well-motivated, and the problem is relevant to the community. While previous work typically relies on column incoherence for $A$, the authors propose a novel sufficient condition of $S$ for the global identifiability of the over-complete dictionary learning problem under their formulation. This is achieved by extending the \\\"sufficiently scattered\\\" condition from non-negative matrix factorization (NMF) to the context of dictionary learning.\", \"weaknesses\": \"1) The connection between the proposed \\\"sufficiently scattered\\\" condition and the conditions outlined in [3] remains unclear. Could the authors clarify this relationship?\\n\\n2) The paper appears to be incomplete. 
For instance, the figure for the experimental section is missing, and in line 187, it seems that $\\\\mathcal{S} \\\\subseteq \\\\mathbb{R}^k$ should be used.\", \"questions\": \"Given that the \\\"sufficiently scattered\\\" condition has been previously introduced in NMF and topic modeling, and that similar identifiability conditions appear in [1,2], could the authors discuss the specific technical challenges posed by applying this condition in the dictionary learning (DL) setting compared to the NMF/topic modeling context?\\n\\n[1] Kejun Huang, Nicholas D Sidiropoulos, and Ananthram Swami. Non-negative matrix factorizationrevisited: Uniqueness and algorithm for symmetric decomposition. IEEE Transactions on Signal Processing, 62(1):211\\u2013224, 2013.\\n\\n[2] Kejun Huang, Xiao Fu, and Nikolaos D Sidiropoulos. Anchor-free correlated topic modeling: Identifiability and algorithm. Advances in Neural Information Processing Systems, 29, 2016.\\n\\n[3] P. Georgiev, F. Theis and A. Cichocki, \\\"Sparse component analysis and blind source separation of underdetermined mixtures,\\\" in IEEE Transactions on Neural Networks, vol. 16, no. 4, pp. 992-996, July 2005\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel formulation for dictionary learning with the dictionary matrix being overcomplete. Under certain conditions, the authors demonstrate that the novel formulation guarantees global identifiability on the overcomplete dictionary. Finally, the authors design an alternating optimization algorithm to solve the proposed formulation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"It is impressive that the proposed formulation can guarantee global identifiability over dictionary learning with an overcomplete dictionary matrix under some conditions.\", \"weaknesses\": \"1. 
It is not easy to verify whether $A$ and $S$ satisfy the Assumptions 3-4. Hence, it is difficult to evaluate the practical applicability of the theoretical results.\\n2. The paper provides only a simple simulation experiment, and the results are somewhat unconvincing.\\n2. The theoretical results are related to the optimal solution to equation 2. However, the proposed optimization algorithm for solving equation 2 cannot guarantee convergence to a global optimum.\", \"questions\": \"1. In Lemma 1, it seems that $\\\\Phi=I$ only when the optimal solution to equation 2 is unique. Hence, if there are multiple optimal solutions, does Lemma 1 still hold? If not, how to demonstrate that the optimal solution to equation 2 is unique?\\n2. How to prove that $A$ in Assumption 4 must exist? In addition, note that $A$ needs to satisfy Assumption 1 as well.\\n3. In line 363, the authors state that they aim to check whether the optimal value of equation 12 equals to 1. However, Theorem 2 only gives the probability that the maximum value is greater than 1. What's the relationship between them?\\n4. Are optimization problems 14 and 2 equivalent? How to determine $\\\\lambda$?\\n5. For the synthetic experiment, using the estimation error to evaluate the algorithm's performance is somewhat unconvincing. It is more reasonable to show that there exist a permutation matrix and a diagonal matrix that can convert the learned dictionary into the real one. In addition, multiple experiments should be conducted to record the corresponding success probability. \\n6. Why didn't the authors compare the proposed algorithm with other dictionary learning algorithms in the experiment? Currently, only a simple experiment is available.\\n7. Where is the Figure mentioned in line 466?\\n8. 
Many sentences in Introduction overlap with Hu and Huang (2023a).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive assessment. Regarding some of your concerns:\\n\\n> As with other conditions for sparse learning and recovery, it appears that the required strong scattering condition cannot be efficiently checked.\\n\\nSection $\\\\S2.3$ is dedicated to answer this question. Although checking this condition exactly is hard, a randomly generated $S$ from the sparse-Gaussian model will satisfy the strong scattering condition with very high probability as long as the sample size is more than $O((k^2/m)\\\\log(k^2/m))$. We hope the analysis in $\\\\S2.3$ gives readers some assurance that this is indeed a reasonable assumption in practice.\\n\\n>It is difficult to assess how much stronger the sufficient scattering condition is versus \\\"that of complete dictionary learning\\\".\\n\\nAn intuitive illustration of how much stronger the scattering condition is shown in Figure 1. Another way to see it is to check the sample complexity analysis. For complete dictionary learning with $k\\\\times k$ dictionaries, Hu and Huang [2023] showed that it requires $O(k\\\\log(k))$ samples. In this paper with overcomplete $m\\\\times k$ dictionaries, our sample complexity is $O((k^2/m)\\\\log(k^2/m))$. The sample complexity is indeed larger, but not by a lot.\\n\\n## Questions:\\n- Line 165: This line gives $\\\\alpha \\\\sum(\\\\cdots) =m$, while in line 161 we have $\\\\alpha^2\\\\sum(\\\\cdots)=m$, so these two equations imply $\\\\alpha=1$.\\n- Line 171: Yes, it is always possible to assume Assumption 1 by rescaling the columns of $A_\\\\natural$ and counter-scale the rows of $S_\\\\natural$. The proof will be identical to that of Lemma 1.\\n- Line 188: This is indeed a typo. 
It should have been $\\\\partial\\\\mathcal{S}$, i.e., the boundary of $\\\\mathcal{S}$. We have fixed it in the revision. \\n- Line 246: Indeed it should not have appeared here. Thank you for noticing. We have fixed it in the revision.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your time to assess our work. There seem to be some misunderstanding about the assumptions, so let us clarify them here.\\n\\n**Assumption 4**\\n\\nAssumption 4 is exactly the compressive sensing problem, i.e., trying to recover a sparse $s$ from underdetermined linear measurements $x=As$, where $A$ is a wide matrix (and known), via minimizing $||s||_1$. There have been numerous papers about it during late 2000s into early 2010s, most notably pioneered by Donoho and Candes, among many other well-known statisticians. A classical result states that a sparse $s$ is the unique solution if $A$ satisfies the restricted isometry property (RIP); furthermore, if $A$ is randomly generated from standard normal, then it satisfies RIP with high probability if $m=O(s\\\\log s)$. Since it is a well-studied problem, we did not emphasize a lot on this issue, but refer the readers to some seminal papers by Donoho and Candes.\\n\\n**Assumption 3**\\n\\nAssumption 3 is original to our paper, and we in fact consider it the biggest contribution. 
We recognize that it is not easy to verify this condition exactly, which is why we spend the entire subsection $\\\\S2.3$ to justify that if $S$ is generated from a probabilistic model called sparse-Gaussian, Assumption 3 will be satisfied with overwhelming probability if $k=O((m^2/k)\\\\log(m^2/k))$, thus giving some assurance that this is a reasonable assumption as long as the sample size is reasonably large.\\n\\n**Assumption 1**\\n\\nIn terms of the matrix factorization, Assumption 1 is without loss of generality, because if $X=AS$, we can always put a diagonal matrix $D$ and its inverse $D^{-1}$ in-between to have $X=ADD^{-1}S$. Assumption 1 asks for the diagonal matrix $D$ so that the rescaled $A$ and $S$ satisfies equation (5). The reason we impose this scaling is because, as we showed in Lemma 1, any optimal solution to (2) also satisfies (3), making it easier to analyze identifiability.\\n\\n**Lemma 1**\\n\\nLemma 1 does not require the optimal solution to (2) to be unique. Lemma 1 tries to find out if there are some special properties of any optimal solution in terms of column/row scaling. The argument is that if $(A_\\\\star,S_\\\\star)$ is optimal to (2), then by applying column/row scaling to them is not going to further reduce the objective value of (2). In other words, by plug in $(A_\\\\star\\\\Psi, \\\\Psi^{-1}S_\\\\star)$ into (2) and treating $\\\\Psi$ as the only variable (while $A_\\\\star$ and $S_\\\\star$ are fixed), we cannot further reduce the objective value of (2). Obviously $\\\\Psi=I$ keeps the same objective value, so equivalently it means $I$ minimizes (2).\\n\\n**Formulation (14)**\\n\\nWhile formulation (2) assumes an exact, noiseless model $X=AS$, which is necessary for identifiability analysis, we recognize that in practice it is usually noisy, thus we modify the formulation by moving the constraint $X=AS$ as a data fidelity term $||X-AS||^2$ in (14). 
This is common practice, for example in compressive sensing people study identifiability by looking at $\\\\min ||s||_1$ subject to $x=As$ but in practice solve the lasso problem $||x-As||^2 + \\\\lambda||s||_1$. The choice of $\\\\lambda$ in principle should reflect the noise level, but in practice has to be tuned. \\n\\nSince this paper focuses on identifiability analysis, we admit the algorithm design part is somewhat premature. We hope the reviewer could understand that we cannot solve all problems in one paper. We believe the proposed new formulation (2) with identifiability guarantees would inspire many follow-up works, particularly in designing more effective algorithms to solve (2). Thank you again for your invaluable time.\"}", "{\"comment\": [\"Thank you for your positive assessment and careful reading. We will carefully revise the paper according to your constructive comments. Let us address a few comments that may benefit from some responses:\", \"Line 106: The reviewer's understanding is correct. We want the proposed formulation to be closely related to the compressive sensing problem, so that for the factorization model $X=AS$, if $A$ is uniquely recovered, then $S$ can be uniquely recovered as well. This is formally addressed in Corollary 1. This also relates to the discussion about Assumption 4, and yes, from the vast literature on compressive sensing, it is quite easy to obtain such a dictionary (for example, by randomly generating $A$ from i.i.d. Gaussian with $m=O(s\\\\log(k))$).\", \"Statement of Lemma 1: Indeed the formulation could also be $\\\\ell_1$ norm squared and Theorem 1 could still hold to identify $A$, but then to recover $S$ from the correct $A$, it is not clear whether optimizing $\\\\ell_1$ norm squared is able to identify $S$. 
By making it a weighted sum of $\\\\ell_1$ norms, it is possible to transform it into a compressive sensing problem and thus numerous prior results can be applied.\", \"It is always possible to satisfy (5), by replacing $(A_\\\\star,S_\\\\star)$ with $(A_\\\\natural,S_\\\\natural)$ in the proof of Lemma 1.\", \"Definition of $\\\\mathcal{B}_m$: this was indeed a rather serious typo. It should be $\\\\Psi Q q/||q||$, not simply $q/||q||$. Thank you for your careful reading.\", \"Line 272: Suppose the QR factorization of $A_\\\\natural^T =QR$ then $\\\\log\\\\det(A_\\\\natural W^{\\\\dagger}(W^{\\\\dagger})^T A_\\\\natural^T) = \\\\log\\\\det R^T + \\\\log\\\\det(Q^T W^{\\\\dagger}(W^{\\\\dagger})^T Q) + \\\\log\\\\det R = \\\\log\\\\det(A_\\\\natural A_\\\\natural^T) + \\\\log\\\\det(Q^T W^{\\\\dagger}(W^{\\\\dagger})^T Q)$.\", \"Line 303-305: that was indeed a typo. We mean sparsity is implicitly implied by Assumption **3**.\", \"sparse-Gaussian (SG) vs. Bernoulli-Gaussian (BG): In one step of the full proof, specifically line 865-866, we found that we do need the sparsity of every column of $S$ to be at most $s$, so SG model becomes much more handy to work with than BG. We suppose for BG we would need one more term in the bound to exclude the probability that $S$ contains even one column with more than $s$ nonzeros, and the resulting bound may not be so clean.\", \"Line 383: Theorem 1 shows that $A$ is identifiable under Assumption 1-3, and in this case $S$ is not necessarily identifiable (we need an additional Assumption 4 according to Corollary 1). 
We speculate that in practice, it is possible that one is only interested in finding the correct dictionary, but the specific sparse coefficients for all samples is not that important (maybe they are treated as training data and the user is more interested in checking the performance of the dictionary on some test samples), hence the sentence \\u201ceven if identifiability of $S_\\\\natural$ is not required\\u201d (but requires identifiability of $A_\\\\natural$).\"]}" ] }
4ndvumlZak
Closing the Gap between Neural Networks for Approximate and Rigorous Logical Reasoning
[ "Tiansi Dong", "Duo Wang", "Mateja Jamnik", "Pietro Lio" ]
Despite the historical successes of neural networks, the rigour of logical reasoning is still beyond their reach. Taking syllogistic reasoning as a subset of logical reasoning, we show supervised neural networks cannot reach the rigour of syllogistic reasoning, mainly because they use composition tables, which are coarse to distinguish each valid type of syllogistic reasoning and because end-to-end supervised learning may change the premises. As Transformer's Key-Query-Value structure is a combination table, we conclude that neural networks built upon Transformers cannot reach the rigour of syllogistic reasoning and, thus, cannot reach the rigour of logical reasoning. We logically prove that oversmoothing, in the setting of part-whole relations, can be avoided, if neural networks use region embeddings, and propose the method of reasoning through explicit constructing and inspecting region configurations, to achieve the rigour of logical reasoning.
[ "neural reasoning", "syllogistic reasoning", "Euler diagram", "composition tables", "rigorous reasoning" ]
Reject
https://openreview.net/pdf?id=4ndvumlZak
https://openreview.net/forum?id=4ndvumlZak
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zgiGI1p9RO", "yGJLJJv9u3", "yFouBjorO2", "vVeM0QNw7I", "vFiTuRlOI9", "uxGn7QMHdv", "tZfzJ6fKEE", "sfpcsmrWOh", "nIhnOHWXPY", "m4N4vCI9DO", "kdVgeo4D9F", "kXAXU65J7b", "j6bbJsBjhQ", "f9Uos0FdEJ", "eolhaTz20v", "e9WjImGu1F", "e5AdN3BLml", "Y0MvjagtnJ", "XzQSZg5xDt", "WmkqQmi9qa", "UBYhrQ37cC", "U4PiNZCjBp", "RWPBCkonzk", "QxrQcMG10g", "MUpFDkOgyQ", "KSdgsAbKqX", "JePqJkTZwJ", "JXIasQUYfR", "ECMX4HCuZB", "D4YfoyEDO0", "CJglUO3J9S", "CItAsSR0eW", "BGqCU5j782", "8ZBTkLsPkk", "5ZUwt0yNWo", "1YnUHbH4lu", "05cINqasR2" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732046921051, 1731575150901, 1732444023692, 1731668202434, 1731447584830, 1730470714618, 1731661694980, 1732226867309, 1733217136301, 1730573412848, 1731953596110, 1737523452986, 1732292516847, 1730634502166, 1731993254985, 1732542117967, 1732439917011, 1731582937805, 1731584782271, 1732964421122, 1731447625827, 1731534252331, 1731591901853, 1731447412305, 1732964595639, 1732475179552, 1732786505246, 1731671444774, 1732964194650, 1730618714832, 1732226809227, 1733224145447, 1732048465732, 1732964726111, 1732292359229, 1732416867089, 1734713613603 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ 
"ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_g989" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_pFMU" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_83YV" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_rCFh" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_83YV" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Authors" ], [ "ICLR.cc/2025/Conference/Submission1442/Reviewer_pFMU" ], [ "ICLR.cc/2025/Conference/Submission1442/Area_Chair_Yajy" ] ], "structured_content_str": [ "{\"comment\": 
\"Thank you very much for the constructive suggestions.\\n\\n> single green circles are out-of-distribution. \\u2026 When processing OOD data, the model's behavior can be erratic, producing unexpected outputs. \\\"there were input images with only one green circle.....\\u2026\\\"\\n\\nIf the regular inputs are images with two circles, images with a single\\ngreen circle will be out-of-distribution inputs and the model may have\\nerratic behaviour (we all know this).\\n\\nHere, we used the method of activation maximisation (line 369) [1] and found\\nthat given particular single green circle inputs, Euler Net can have regular\\noutputs, e.g., the red circle is inside the blue circle. The only possible\\nexplanation is that the latent vector embeddings of two single green circles\\nare similar to one pair of regular inputs (one image with a red circle and a\\ngreen circle, the other image with a green circle and a blue circle), and\\nEuler Net used this pair of regular inputs to produce a regular output.\\nThis means that the well-trained Euler Net can automatically recognise the\\nwhole from the partial inputs. Recent research on object recognition\\nsupports this capability of the Siamese network architecture (Section 3.1).\\n\\n\\nThis special out-of-distribution input let Euler Net demonstrate a desirable\\ncapability for object recognition and produce an unintended output for\\nlogical reasoning. \\n\\n> Do you mean no requirement for training data is an advantage of neural models?\\n\\nYes. Having no requirement for training data is an advantage for developing\\ncomputational models of reasoning [2-7], because (human) reasoning is a\\nprocess of model construction and inspection. Recent research shows that\\nthe basis of (human) reasoning is about possibility [8], while training data\\nis about probability, which needs the stable-world assumption -- the training\\ndata and the testing data share the same distribution [10-11]. 
To develop\\nneural models for high-level cognition, we shall go beyond the scope of this\\nstatistical paradigm [10]. If we do not, then, as we show here, the rigour\\nof syllogistic reasoning cannot be reached. \\n\\n\\n[1] Wojciech Samek, Gr\\u00e9goire Montavon, Andrea Vedaldi, Lars Kai Hansen, and\\nKlaus-Robert M\\u00fcller (eds.). Explainable AI: Interpreting, Explaining and\\nVisualizing Deep Learning, volume 11700 of Lecture Notes in Computer\\nScience. 2019.\\n\\n[2] P. N. Johnson-Laird, R. M. J. Byrne, Deduction, Lawrence Erlbaum\\nAssociates, Inc., 1991.\\n\\n[3] M. Knauff, T. Fangmeier, C. C. Ruff, P. N. Johnson-Laird, Reasoning,\\nmodels, and images: behavioral measures and cortical activity, Journal of\\nCognitive Neuroscience 15 (4) (2003) 559\\u2013573.\\n\\n[4] G. Goodwin, P. Johnson-Laird, Reasoning about relations, Psychological\\nreview 112 (2005) 468\\u201393.\\n\\n[5] M. Knauff, A neuro-cognitive theory of deductive relational reasoning\\nwith mental models and visual images, Spatial Cognition & Computation 9 (2)\\n(2009) 109\\u2013137.\\n\\n[7] M. Ragni, M. Knauff, A theory and a computational model of spatial\\nreasoning with preferred mental models, Psychological review 120 (2013)\\n561\\u2013588.\\n\\n[8] Johnson-Laird, P.N., Byrne, R.M.J. & Khemlani, S.S. Models of\\nPossibilities Instead of Logic as the Basis of Human Reasoning. Minds &\\nMachines 34, 19 (2024).\\n\\n[9] H. Mercier, D. Sperber, The Enigma of Reason, Penguin, 2018.\\n\\n[10] Anirudh Goyal and Y. Bengio. Inductive biases for deep learning of\\nhigher-level cognition. Proceedings of the Royal Society A: Mathematical,\\nPhysical and Engineering Sciences, 478, 10 2022.\\n\\n[11] Gerd Gigerenzer. How to Stay Smart in a Smart World: Why Human\\nIntelligence Still Beats Algorithms. The MIT Press, 2022.\"}", "{\"comment\": \"It is well known that NNs/transformers fail to reason out of distribution, see eg [1, 2, 3]. I don't debate this. 
My comment is on how this paper attempts to show that it is impossible to learn syllogistic reasoning, which I believe is poorly explained, incomplete and lacks evidence.\\n\\n> These single green circles are different from the standard inputs (two circles). \\n\\nYes, since they're OOD. It will act like the distribution it is trained on - this is not surprising at all. \\n\\n> Line 357: new randomly generated test data have different distributions from the training data.\\n\\nOf course I get that, but nowhere it is specified _how_ this distribution is different. \\n\\n> The theorem is solidly proved using region-based spatial logic. The proof shall be independent of model architectures.\\n\\nThe proof really needs more than this. There are hidden assumptions on what these neural architectures are and that they oversmooth (which again is undefined). \\n\\nI increased my confidence to reflect the rebuttal. \\n\\n[1] Zhang, Honghua, et al. \\\"On the paradox of learning to reason from data.\\\" Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023.\\n\\n[2] Berglund, Lukas, et al. \\\"The Reversal Curse: LLMs trained on \\u201cA is B\\u201d fail to learn \\u201cB is A\\u201d.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[3] Mirzadeh, Iman, et al. \\\"Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models.\\\" arXiv preprint arXiv:2410.05229 (2024).\"}", "{\"comment\": \"I thank the authors for the detailed responses. 
Unfortunately, the responses do not address my core issues with the paper.\\n\\n> This method causes all network architectures, if they use composition tables, not to cover all types of valid syllogistic reasoning.\\n\\nSure, but a neural network can be a map with multiple outputs, which can cover all valid types of syllogistic reasoning.\\n\\n> They do not address the question whether the three statements form a valid reasoning.\\n\\nFormal languages can encode decision questions, such as \\\"Statement A, Statement B, Statement C?\\\" Which gets accepted if the ? is a yes or no. \\n\\n> Transformers suffer from oversmoothing when their depth increases.\\n\\nThanks for formalising this statement. Indeed: This assumption of increased depth is vital. However, the authors do not prove that deep transformers are needed: Can the single-hop task presented in this paper be solved with a single transformer layer? Since it's a finite task, I would assume so. \\n\\n> LLMs indeed perform well with syllogistic reasoning but have not reached the rigour of syllogistic reasoning [1,2].\\n\\nSure, but that's a different task than the one with circles studied in the paper, as it involves natural language. Furthermore, these papers do not show the impossibility of this task, which is what is argued in the paper under review. \\n\\n> These new data were generated by randomly choosing the centres and the radii of two circles (as long as they are complete in the image). Two images in the original dataset of Euler Net were described in Section 5.1 in the supplementary material \\u2013 line 389 \\u2013 395 \\u2013 they were also randomly generated, but with different constraints.\\n\\nThe paper would be improved if this was specified more clearly in the paper, for instance in the Appendix. \\n\\n> The idea is to implement a wrapper system that can decide whether the output of Euler Net is correct, and if not, this wrapper system will create a new piece of training data for this error. 
\\n\\nThe description here still raises many questions on how this is actually implemented. Pseudocode of this system would help. \\n\\n> If we allow NN to output multiple answers, NN will give each answer a probability. The sum of these probabilities will be less than or equal to 1. To reach the rigour of logical reasoning, NN should assign each answer with the probability 1. \\n\\nA neural network can have multiple binary outputs (multilabel classification), each giving the probability of the conclusion being true. These answers need not be exclusive. \\n\\n> RNNs\\u2019 Turing Complete (similar to Transformers) are about linguistic expressiveness, not about their reasoning ability.\\n\\nThis is incorrect. A Turing complete formalism can implement any computational reasoning task, in fact, Turing completeness is how computability is defined... \\n\\nFinally, in a comment to another reviewer, the authors shared:\\n\\n> Many researchers may assume they can, at least for syllogistic reasoning (you may see the comments of our last reviewer)\\n\\nI only ever referred to the **single-hop** \\\"syllogistic reasoning with circles\\\" task studied in the paper under review. In fact, I shared that I agree in general.\\n\\n> To be clear, I don't disagree much with the opinion on the feasibility of learning general reasoning.\"}", "{\"comment\": \"None of my concerns are properly addressed, so I'm keeping my score.\\n\\nTo be clear, I don't disagree much with the _opinion_ on the feasibility of learning general reasoning. The point is that this is not a debate on this problem. I act as a reviewer of the paper, which I do not believe is in a state that is ready for publication, as I believe the evidence presented is not strong enough and is not clearly described. \\n\\n> No. 
The problem is finite and appears simple, but cannot be solved by both weaker and complex architectures (without qualitative extensions).\\n\\nWith the right data, any finite problem can be solved by training NNs by just learning a direct input-output map.\\n\\n> Oversmoothing is defined in line 502-503 and also in line 507 again -- outputs converge to the same feature embedding.\\n\\nThat's not a formal definition. What is the notion of convergence? What are the outputs? What is the process with which they converge? As far as I know, oversmoothing converges in the depth, meaning limited-depth models do not oversmooth. Limited depth models can solve any finite problem; hence they can solve the _finite_ problem of syllogistic reasoning.\"}", "{\"comment\": \"Yes. The phenomena described in Section 4.1 are entirely different from the out-of-distribution scenarios. These single green circles are different from the standard inputs (two circles); in this way, they are out-of-distribution, and the model will perform incorrectly on them, as we expect. What is surprising is that the well-trained Euler-Net may automatically complete a single green circle into standard inputs. For object recognition, this is a great capability \\u2013 it can recognise objects by observing their partial images (we do not say partial images are out-of-distribution inputs). But, for reasoning, this capability shall not be allowed, because the neural networks shall not add new premises.\\n\\nIf using (non-)vector or vector feature embeddings and the output embeddings oversmooth, then the converged output embedding must be a single vector feature embedding (a point). Or, put it this way: if feature embeddings are spheres with radii >=0, and output embeddings oversmooth, then their radii = 0. This means if we restrict radii > 0, oversmoothing will not happen. 
\\n\\nAfter researchers promote vector embeddings into spheres and introduce the method of reasoning using model construction, neural models achieve rigorous syllogistic reasoning without training data, see, Sphere Neural-Networks for Rational Reasoning https://arxiv.org/abs/2403.15297\"}", "{\"summary\": \"The authors study whether current neural networks can perform robust syllogistic reasoning via Euler diagrams, showing that they fail in very specific aspects, and conclude with arguments stating that neural networks need to go beyond vector embeddings to solve rigorous reasoning.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is fairly well written, with some clear figures, especially in the revision. It presents multiple interesting ideas and experiments on syllogistic reasoning, a simple but easy-to-study problem.\", \"weaknesses\": \"I found it hard to follow what the contributions of this paper are. There are a few results that seem simple, arbitrary, poorly explained, and relevant only to a single network architecture. It is not clear to me what I should take home from these experiments.\\n\\nThe 'sketched proof' which is supposed to prove that transformers cannot do syllogistic reasoning also falls short: It assumes that they oversmooth, which only happens for transformers with many layers (the theoretical results are for the infinite-depth setting). If this happened consistently in practical transformer models, there is no chance LLMs could work as well as they do (as also Dovonon 2024 argues and shows, which is cited). \\n\\nTogether, this paper only provides meagre evidence for the infeasibility of syllogistic reasoning. 
Then the authors argue that different concept embeddings are needed, but do not compare (either theoretically or empirically) to the vector case, except for referring quickly to related work.\", \"questions\": [\"What is the motivation for specifically studying this Siamese Masked Autoencoder model? I suppose that this model does not use specific embeddings for each object (unlike models in object-centric learning, involving eg slot attention [1] or the method specific for this task as cited [2])\", \"Line 357: \\\"We fed new randomly generated test data' How is this data different?\", \"Line 359: What's the motivation for Euler Net version 2? The description of this method is extremely difficult to follow and incomplete. How does a model 'generate' input images?\", \"4.1, first paragraph. This lacks in details. Furthermore, it's well known that standard NNs are not adversarially robust. This connection is missing.\", \"4.2: I did not understand the point of this experiment. Of course a model will not be able to say anything meaningful about incorrect input data that we never defined how to respond to, especially if it's not designed for out of distribution detection.\", \"Line 428: This blanket statement is highly overclaiming these results. This is about misspecification - not a lack of learning capability.\", \"4.3: It is not clear to me how these combination tables are defined from a neural network point of view. Furthermore, this result again comes from the design of the neural network. If it's allowed to output multiple answers (for instance like an LLM would be able to), it may give all syllogistic conclusions.\", \"479 \\\"More powerful than vanilla RNN, LSTM\\\": From a theoretical perspective, this is hard to claim. RNNs (with unbounded time) are Turing Complete [3]. Similar results exist for Transformers, but these require an infinite 'scratchpad / chain of thought' [4]. 
I suppose this 'powerful' refers to an empirical interpretation, but this should be clarified.\", \"Theorem 1 is unclear and informal, and does not properly state its assumptions. What is oversmoothing? Output embeddings? \\\"will be points\\\"? Of course output embeddings are points. What are the assumptions on the model architecture? A quick look at the proof did not help me understand these questions. This certainly doesn't constitute a 'rigorous proof\\\" (Line 531)\", \"Similarly for Theorem 2, I have no idea what \\\"If the output embeddings are not points\\\" would mean.\", \"[1] Locatello, Francesco, et al. \\\"Object-centric learning with slot attention.\\\" Advances in neural information processing systems 33 (2020): 11525-11538.\", \"[2] Wang, Duo, Mateja Jamnik, and Pietro Lio. \\\"Abstract diagrammatic reasoning with multiplex graph networks.\\\" arXiv preprint arXiv:2006.11197 (2020).\", \"[3] Nowak, Franz, et al. \\\"On the representational capacity of recurrent neural language models.\\\" arXiv preprint arXiv:2310.12942 (2023).\", \"[4] Lena Strobl, William Merrill, Gail Weiss, David Chiang, Dana Angluin; What Formal Languages Can Transformers Express? A Survey. Transactions of the Association for Computational Linguistics 2024; 12 543\\u2013561.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We hope you have read our arguments and explanation. At this stage, we win the debates. Do you have further questions or anything hard to accept?\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe go through all your comments as follows (part 2).\\n\\n> 4.2: I did not understand the point of this experiment. Of course a model will not be able to say anything meaningful about incorrect input data that we never defined how to respond to, especially if it's not designed for out of distribution detection. 
\\n\\nHere, we show another limitation of supervised neural networks for reasoning \\u2013 we cannot exhaust unintended input data, as intended input data can automatically create them. We can extend the set of intended data, but the extended data will create new unintended data. \\n\\nOn the other hand, recent research shows the possibility of developing neural networks that can achieve the rigour of syllogistic reasoning without using training data [3].\\n\\n[3] T. Dong, M. Jamnik, P. Li\\u00f3 (2024), Sphere Neural Network for Rational Reasoning. https://arxiv.org/abs/2403.15297.\\n \\n> Line 428: This blanket statement is highly overclaiming these results. This is about misspecification - not a lack of learning capability.\\n\\nOur experiment shows that we cannot have a complete specification. We may specify them into one of the classes or define them as invalid. Each will generate new under-specified outputs. \\n\\n> 4.3 It is not clear to me how these combination tables are defined from a neural network point of view. Furthermore, this result again comes from the design of the neural network. If it's allowed to output multiple answers (for instance like an LLM would be able to), it may give all syllogistic conclusions.\\n\\nIf we allow NN to output multiple answers, NN will give each answer a probability. The sum of these probabilities will be less than or equal to 1. To reach the rigour of logical reasoning, NN should assign each answer with the probability 1. \\n\\n> Line 479. RNNs (with unbounded time) are Turing Complete [3]. Similar results exist for Transformers.. I suppose this 'powerful' refers to an empirical interpretation, but this should be clarified. \\n\\nRNNs\\u2019 Turing Complete (similar to Transformers) are about linguistic expressiveness, not about their reasoning ability. Given a finite set of words {Greek, human, mortal, X, Y, no, all}, RNN can decide \\u201call Greeks are human. no X are Y. 
all Greeks are mortal\\u201d are correct syllogistic statements. They do not address the question whether the three statements form a valid reasoning.\\n\\nYes. This \\u2018powerful\\u2019 is an empirical interpretation. We referenced a recent survey of Transformer\\u2019s applications in Line 484. \\n \\n> Theorem 1 is unclear and informal, and does not properly state its assumptions. What is oversmoothing? Output embeddings? \\\"will be points\\\"? Of course output embeddings are points.\\n\\nThe outputs of neural networks are not necessarily points. They can be boxes [4], cones [5], or spheres [3].\\n\\nOur proof only focuses on the output representation and abstracts out the neural architecture. So, the theorem applies to any neural architecture, including Transformers.\\n\\n> Theorem 2, I have no idea what \\\"If the output embeddings are not points\\\" would mean.\\n\\n\\u201cthe output embeddings are not points\\u201d means that neural networks optimise extended geometric objects, instead of points, for example, boxes [4], cones [5], or spheres [3].\\n\\n[4] H. Ren, W. Hu, J. Leskovec. Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings. ICLR, 2020.\\n\\n[5] Zhang, Zhanqiu and Wang, Jie and Chen, Jiajun and Ji, Shuiwang and Wu, Feng. ConE: Cone embeddings for multi-hop reasoning over knowledge graphs. NeurIPS 2021.\"}", "{\"comment\": \"I thank the authors for their extensive revision.\\n\\nOn a brief look, this revision addresses some of my concerns with the paper. Still, this is a large change to the paper (\\\"almost rewrote the entire text\\\"), and it is not clear what changes were made in this revision. Therefore, I suggest the paper goes through another full round of reviews, so multiple people can have a thorough look at it. I updated my score to reject.\", \"ps\": \"I regret choosing the \\\"strong reject\\\" option initially. The number \\\"1\\\" next to it is unnecessarily harsh. 
My apologies for this.\"}", "{\"summary\": \"The paper discusses the \\\"dual-process\\\" theory of mind, highlighting the distinction between fast, intuitive thinking and slower, more deliberate thinking. It concludes that LLMs and\\nFoundation Models built upon Transformers cannot reach the rigour of syllogistic\\nreasoning. \\nThe article proposes a method of transforming syllogistic relationships into \\\"part-whole relationships\\\" and suggests using non-vector embeddings instead of traditional vector embeddings to avoid the problem of \\\"oversmoothing.\\\" Oversmoothing can cause the outputs of neural networks to converge to similar embeddings, thereby affecting the accuracy of reasoning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper attempts to analyze and study the reasoning capabilities of transformers, which is of great value. Additionally, the methods proposed in this paper possess certain innovative and theoretical significance.\", \"weaknesses\": \"1. This work lacks experimental validation and seems not fully complete.\\n\\n2. The article is not clearly written. The abstract and introduction are somewhat verbose, and the key innovations and objectives are not clearly defined.\", \"questions\": \"In fact, enhancing the inference capabilities of neural networks is a very challenging task. Will merely changing traditional vector embeddings yield significant improvements, or can it lead to substantial advancements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> [1] Locatello, Francesco, et al. 
\\\"Object-centric learning with slot attention.\\\" Advances in neural information processing systems 33 (2020): 11525-11538.\\n\\nUsing the Slot Attention mechanism, Euler Net would be more reliable at recognising the components in the input images and at separating intended inputs from unintended inputs. But this will not change the content of the composition table (mapping intended inputs to outputs). In Figure 6, we show that composition tables cannot cover all valid types of syllogistic reasoning.\\n\\n> [2] Wang, Duo, Mateja Jamnik, and Pietro Lio. \\\"Abstract diagrammatic reasoning with multiplex graph networks.\\\" arXiv preprint arXiv:2006.11197 (2020).\\n\\nThe Euler Net in this paper also used composition tables and covered 75% of valid types of syllogistic reasoning.\\n\\n> [3] Nowak, Franz, et al. \\\"On the representational capacity of recurrent neural language models.\\\" arXiv preprint arXiv:2310.12942 (2023).\\n\\n> [4] Lena Strobl, William Merrill, Gail Weiss, David Chiang, Dana Angluin; What Formal Languages Can Transformers Express? A Survey. Transactions of the Association for Computational Linguistics 2024; 12 543\\u2013561.\\n\\nThe two papers are about the linguistic expressive or representational powers of RNNs and Transformers. Syllogistic statements are simple expressions for both. But being able to determine whether statements are syllogistic statements does not entail being able to determine internal relations among these syllogistic statements.\\n\\nRoughly, given a finite set of words {Greek, human, mortal, X, Y, no, all}, NN can decide \\u201call Greeks are human. no X are Y. all Greeks are mortal\\u201d are correct syllogistic statements at the level of syntax. 
They do not address the question whether the three statements form a valid reasoning.\\n\\nYou raised a valuable issue \\u2013 we will reference your listed papers and separate our work from them.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the suggestion.\\n\\nWe conducted a validation experiment to show that, using a combination table, Euler Net (EN) cannot cover all valid types of syllogistic reasoning, and will add it to the supplementary material and publish the new dataset.\\n\\nWe created a new dataset that covers all 24 valid types of syllogistic reasoning, to test a well-trained Euler Net (99.8% accuracy on the benchmark dataset).\", \"this_dataset_is_created_as_follows\": \"We group the 24 _valid_ syllogism types into 14 groups, as _no x are y_ has the same meaning as _no y are x_, and _some x are y_ has the same meaning as _some y are x_. For each group, we created 500 test cases by extracting hypernym relations from WordNet-3.0, each test case consisting of one true conclusion and one false conclusion, totalling 14000 syllogism reasoning tasks.\\nIn the hypernym structure, _elementary\\\\_particle.n.01_ is a descendant of _natural\\\\_object.n.01_ and _artifact.n.01_ is not a descendant of _natural\\\\_object.n.01_. So, we create the true syllogistic reasoning as: If _all elementary\\\\_particle.n.01 are natural\\\\_object.n.01_, _no artifact.n.01 are natural\\\\_object.n.01_, then _all elementary\\\\_particle.n.01 are not artifact.n.01_. The false syllogistic reasoning will be: If _all elementary\\\\_particle.n.01 are natural\\\\_object.n.01_, _no artifact.n.01 are natural\\\\_object.n.01_, then _some elementary\\\\_particle.n.01 are artifact.n.01_. 
\\n\\n\\\\begin{array}{|l|c|l|c|l|c|}\\n \\\\\\\\hline Valid\\\\ Type & Accuracy & Valid\\\\ Type& Accuracy& Valid\\\\ Type & Accuracy \\\\\\\\\\\\\\\\\\\\\\\\hline\\n BARBARA & 100\\\\\\\\% & BARBARI& 50\\\\\\\\% &BAROCO&66.7\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n BAMALIP & 50\\\\\\\\% & BOCARDO& 75\\\\\\\\% &CALEMES &100\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n CAMESTROS & 50\\\\\\\\% & CELARENT& 100\\\\\\\\% &CESARO &50\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nCALEMO & 50\\\\\\\\% & CESARE& 100\\\\\\\\% & CELARONT&50\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n DARAPTI & 100\\\\\\\\% & DARII& 75\\\\\\\\% &DISAMIS &75\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nFESAPO & 100\\\\\\\\% & DATISI& 75\\\\\\\\% &DIMATIS&75\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline\\nFELAPTON & 100\\\\\\\\% &FERIO& 83.3\\\\\\\\% &FERISON &83.3\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline\\nCAMESTRES & 100\\\\\\\\% &FRESISON& 83.3\\\\\\\\% &FESTINO&83.3\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nOverall&76\\\\\\\\% &&& \\\\\\\\\\\\\\\\\\\\\\\\hline \\n\\\\end{array}\"}", "{\"summary\": \"This paper proposes a task that converts syllogism into subset relations and then generates an image dataset that visualizes the subset relations and evaluates neural networks. The authors show in their experiments that although Euler Networks can learn part-whole relations between two entities, it cannot learn complex combinations of these relations, resulting in a lack of validity in the equivalent syllogism reasoning. 
Furthermore, the authors hypothesized that NNs should use one-hot representation to acquire the rigorous reasoning ability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper presents an important question that the community really cares about.\", \"The author shows the equivalence between syllogism reasoning and part-whole relations, and converts the reasoning task into a visual prediction problem, which is interesting to me.\"], \"weaknesses\": [\"This paper still lacks enough experiments to support the authors' claims. Why would a one-hot representation save neural nets in reasoning soundness issues?\", \"The presentation of this paper could be further improved. The structure of it now looks more like a technical report. It lacks figures and charts to present the experimental results.\", \"The discussion is high-level, while the technical details or insufficiencies of the compared methods are not discussed enough.\"], \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"For each part of your answer:\\n- I am afraid your reply isn't convincing enough. \\n As you said, these single green circles are out-of-distribution. When processing OOD data, the model's behavior can be erratic, producing unexpected outputs. Therefore, the model can behave incorrectly on an image with a single green circle. \\n In line 371, you said, \\\"there were input images with only one green circle........\\\" It is important to clarify if this behavior is consistent. If not, and similar OOD images lead to different or inconsistent errors, this simply reflects the model's unpredictable behavior, which is typical of OOD scenarios. 
\\n \\n- I would appreciate a more rigorous explanation, particularly with relevant citations or explanations for any key terms introduced.\\n \\n- I don't think your answer addresses my question. Do you mean no requirement for training data is an advantage of neural models?\"}", "{\"comment\": \"> I only ever referred to the\\u00a0single-hop\\u00a0\\\"syllogistic reasoning with circles\\\" task studied in the paper under review.\\n> A neural network can have multiple binary outputs (multilabel classification), each giving the probability of the conclusion being true. These answers need not be exclusive.\\n\\nIf we use multiple output labels for the valid reasoning \\u201call W are U. some V are W. therefore, some V are U\\u201d, in the case of \\\"syllogistic reasoning with circles\\\" task studied in the paper (Figure 6), we will map \\u201csome V are W\\u201d into three kinds of circle relations: (1) V circle inside W circle, (2) V circle partially overlaps with W circle, (3) V circle contains W circle, and map the conclusion (*) \\u201csome V are U\\u201d to the vector [1,1,1,0]. This means that \\u201ccircle W is inside circle U\\u201d and (3) \\u201cV circle contains W circle\\u201d will also map to [1,1,1,0]. This output is inconsistent with the logical conclusion and the logical consistency of the two premises (\\u201ccircle W is inside circle U\\u201d and \\u201cV circle contains W circle\\u201d). So, These answers need to be exclusive. That is another reason multiple labels will not work (achieve rigorous reasoning).\\n\\n> However, the authors do not prove that deep transformers are needed: Can the single-hop task presented in this paper be solved with a single transformer layer? Since it's a finite task, I would assume so.\\n\\nFrom line 486 to 493, we show the query-key-value table is a composition table automatically learned by a single transformer layer. If it is a vision transformer, the table can not enumerate all valid syllogistic reasoning types. 
If it is a transformer for symbolic inputs (sentences), it will do so probabilistically and cannot promise to work correctly for out-of-distribution words. We will explicitly state this in the paper.\\n\\nIf anything in the explanations is still unclear, please let us know. Thank you.\"}", "{\"comment\": \"> Thank you for your answer, but this has nothing to do with logic soundness or validity. I don't think neural networks or statistical learning in general could guarantee soundness in logical reasoning, they are just probabilistic approximately correct.\\n\\nThank you for your valuable feedback. We also do not think supervised neural networks or statistical learning can guarantee soundness in logical reasoning. The aim of this paper is to argue why they cannot. \\nMany researchers may assume they can, at least for syllogistic reasoning (you may see the comments of our last reviewer), in part because of the simple forms of syllogism and the Turing completeness of RNNs (with unbounded time). \\n\\n> But I think that Euler Network is a very special case in machine learning models. Your claim that it can learn one-hop part-of relational reasoning soundly (and in image representation) still lacks a theoretical guarantee.\\n\\nEuler Network is a special neural model in the literature designed particularly for syllogistic reasoning. It achieved 99.8% accuracy on the benchmark dataset. This is a statistical result -- if we create a new testing dataset having a different distribution from the benchmark, the performance of Euler Network will drop to 56% (line 358). We do not claim that Euler Net can learn one-hop part-of relational reasoning soundly (and in image representation). We agree that it can do this empirically and argue it cannot achieve rigorous syllogistic reasoning. \\n\\nYou probably mean, \\u201cEuler Network is a very special case, so we cannot draw a strong conclusion that supervised neural networks cannot reach rigorous syllogistic reasoning\\u201d. 
The strategy of our argument is as follows: we analyse the reasons from this special case. One reason is the use of composition tables to establish premise-conclusion relations. The family of neural networks that use this method will not reach rigorous syllogistic reasoning. \\n\\n> Why would a one-hot representation save neural nets in reasoning soundness issues?\\n\\nLet us answer your original question by comparing two formats of training data: one is ((input image 1, input image 2), output one-hot representation), the other is ((input image 1, input image 2), output image); one output one-hot representation will correspond to many possible output images. If one one-hot representation corresponds to 1000 images, the amount of training data in the second format will be 1000 times more. An additional question is how to interpret this output image \\u2013 we need an extra network that maps an output image to a one-hot representation. In the design of Euler Net, developers included this extra network in the reasoning module.\"}", "{\"comment\": \"You totally misunderstand our argument. We do not argue it (NN) is impossible to learn syllogistic reasoning. What we argue is: it (NN) is impossible to reach the rigour of the symbolic level of syllogistic reasoning. The symbolic level rigour means: for any syllogistic reasoning (two premises and one conclusion), the NN shall determine whether the three statements are satisfiable or unsatisfiable.\\n\\nFor a simple supervised NN, if it is confident for the reasoning \\u201call Greeks are humans. All humans are mortal. Therefore, all Greeks are mortal\\u201d, it will not be confident for the weaker reasoning \\u201call Greeks are humans. All humans are mortal. Therefore, SOME Greeks are mortal\\u201d. However, both are valid syllogistic reasoning. 
\\n\\nTo deduce that the escape velocity from the Earth's gravity is approximately 11 km per second, we do not need to examine the mechanics of bikes or cars (why they cannot). However, with this theoretical result, we can design rockets to escape the Earth's gravity and fly to the moon. \\n\\nThe proof is a complete logical deduction. We do not and shall not consider engineering aspects of existing neural architectures. With this theoretical result, we conclude (after negating the theorem) that one condition for an NN to avoid oversmoothing (which is well defined in the literature and which we also referenced) is to use non-vector embedding. This is a pre-condition to reach the rigour (symbolic level) of logical reasoning (this level is much higher than the level discussed in the papers you listed). Our conclusion is consistent with the other research that we have referenced.\"}
The condition (which again is not well-defined in the paper) of oversmoothing is thus much too strong to claim impossibility results.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks again for giving us feedback. We have uploaded a new version of the paper and hope all your concerns are addressed. We would very much appreciate your continued feedback.\"}", "{\"comment\": \"We are not sure whether we correctly understand your question. One-hot representation reduces the amount of training data, compared with using image representation.\"}", "{\"comment\": \"The takeaway is as follows:\\n\\nCurrent deep-learning systems cannot and will not reach the rigour of logical reasoning, no matter what kinds of and how much training data we use. To achieve the rigour of logical reasoning, traditional neural networks shall make qualitative extensions, namely, promoting vector embedding to non-vector embedding. \\n\\nThe \\u2018sketched proof\\u2019 does not prove transformers cannot do syllogistic reasoning. \\n\\nLLMs work very well in terms of language communication, but it does not follow that they can reason well. See the reference below. \\n\\nEvelina Fedorenko, Steven T. Piantadosi, and Edward A. F. Gibson (2024). Language is primarily a tool for communication rather than thought. In Nature.\\n\\nThis paper takes syllogistic reasoning as the micro-world of rationality and shows current deep-learning systems cannot and will not reach the rigour of syllogistic reasoning. \\n\\nSiamese architectures are used for object recognition and for syllogistic reasoning. In both cases, they achieve excellent results. However, the phenomena described in Section 4.1 raise a problem -- these single green circles are different from the standard inputs (two circles). Surprisingly, the well-trained Euler-Net may automatically complete a single green circle into standard inputs. 
For object recognition, this is a great capability \\u2013 it can recognise objects by observing a partial image (we do not say partial images are out-of-distribution inputs). But, for reasoning, this capability shall not be allowed, because the neural networks shall not add new premises.\\n\\nLine 357: New randomly generated test data have different distributions from the training data.\\n\\nLine 359: The motivation is to let Euler Net improve its performance by itself. It is not difficult to create an image with two circles, given two centre points and two radii.\\n\\nThe theorem is solidly proved using region-based spatial logic. The proof shall be independent of model architectures.\"}", "{\"comment\": \"Our argument tells that as long as Transformers and RNNs use (1) vector embeddings for concepts and (2) composition tables for mapping premises and conclusions, they will not reach the rigour of the symbolic level of syllogistic reasoning. This is independent of what training data you use \\u2013 the example that we gave in the last response shows there are no consistent training data for the logical conclusion and the logically consistent conclusion.\\n \\n\\u201cexisting expressivity results in my review on these architectures that go far beyond the complexity of this problem\\u201d. \\nExactly, they are beyond the complexity of syllogistic reasoning. That is the reason we choose syllogistic reasoning to explore whether and how traditional NNs (including Transformers and RNNs) can reach rigorous reasoning. If they cannot for simple syllogistic reasoning, as the micro-world for rational reasoning, they will not for other rational reasoning. \\n\\nSyllogistic reasoning is special \\u2013 it dominated the research of logic for 2000 years, and the research of rationality in psychology for over 100 years (till today, it is still not completely solved). Solving neural syllogistic reasoning is the first step to solving more complex reasoning problems. 
\\n\\n> This is just how you setup your NNs and data. Natural language inference and models tackling this is the task of judging whether a reasoning step holds and allows for multiple reasoning steps to be valid.\\n\\nNo. Syllogistic reasoning is atomic. We cannot break it into multiple steps, and neither can they. The current methods (using training data) cannot determine there is no model (unsatisfiability), and thus cannot separate satisfiable reasoning from valid reasoning and reach the rigour of the symbolic level of logical reasoning. \\n\\n> Up to you. \\n\\nNo, it is up to whether we want to be alchemists or modern engineers.\\n\\n>The problem you study is finite and can be solved by much weaker architectures than ones that exhibit oversmoothing (infinite / deep transformers or GNNs). The condition (which again is not well-defined in the paper) of oversmoothing is thus much too strong to claim impossibility results.\\n\\nNo. The problem is finite and appears simple, but cannot be solved by either weaker or more complex architectures (without qualitative extensions). Oversmoothing is defined in lines 502-503 and again in line 507 -- outputs converge to the same feature embedding. \\n\\nNo matter whether the condition of oversmoothing is strong or not, our proof (the rigorous logical deduction) guarantees the results.\"}
Three statements being unsatisfiable (contradictory) is a topic of possibility, not probability -- there are no training data for deciding unsatisfiability.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks again for giving us feedback. We have uploaded a new version of the paper with more experiments and examples. We would very much appreciate your continued feedback.\"}", "{\"comment\": \"Thank you very much for the critical and precious comments.\\n\\n> a neural network can be a map with multiple outputs, which can cover all valid types of syllogistic reasoning.\\n> A neural network can have multiple binary outputs (multilabel classification), each giving the probability of the conclusion being true. These answers need not be exclusive.\\n\\nYes, we agree that an NN can have multiple outputs. After a softmax operation, each output will have a probability. This is consistent with our argument that they do not reach the rigour of logical reasoning. In the case of two output classes, each will have 50%. That is the probability of tossing a coin. \\n\\nA new issue is that these multiple outputs cannot distinguish the logical conclusion from logical consistency. If the inputs are \\u201call Greeks are human. All humans are mortal\\u201d, the outputs are \\u201call Greeks are mortal\\u201d and \\u201csome Greeks are mortal\\u201d. The NN cannot learn that \\u201call Greeks are mortal\\u201d (logical conclusion) is stronger than \\u201csome Greeks are mortal\\u201d (logical consistency). It follows that with this method we would need to teach the NN all logical consistencies. \\n\\n> Can the single-hop task presented in this paper be solved with a single transformer layer? Since it's a finite task, I would assume so.\\n\\nFollowing the above discussion, a single transformer layer with multiple outputs can solve syllogistic reasoning probabilistically. \\n\\n> Formal languages can encode decision questions, such as \\\"Statement A, Statement B, Statement C?\\\" Which gets accepted if the ? 
is a yes or no.\\n> This is incorrect. A Turing complete formalism can implement any computational reasoning task, in fact, Turing completeness is how computability is defined\\u2026\\n\\nYou are right. We agree that formal languages (in symbolic AI) can encode decision questions and a Turing complete formalism (in symbolic AI) can implement any computational reasoning task. We should have written this clearly \\u2013 here, we talk about RNNs, which can be Turing complete (given unbounded time). As described in the paper [1], they (given unbounded computation time) can simulate any deterministic probabilistic Turing machine. But, it is still a non-deterministic automaton [1. pp. 7017], so it will not reach the rigour of any logical reasoning. We will include this topic and discussion in the paper. \\n\\n[1] Nowak, Franz, et al. \\\"On the representational capacity of recurrent neural language models.\\\" arXiv preprint arXiv:2310.12942 (2023). \\n\\n> these papers do not show the impossibility of this task, which is what is argued in the paper under review.\\n\\nYes, these papers do not explicitly show the impossibility of LLMs reaching the rigour of syllogistic reasoning. But these papers point out that LLMs learn human errors in the training data. This implicitly suggests that LLMs cannot reach the rigour of syllogistic reasoning. \\n\\n> The paper would be improved if this was specified more clearly in the paper, for instance in the Appendix. \\n\\nWe will elaborate and move this part from the current supplementary material into the Appendix. \\n\\n> The description here still raises many questions on how this is actually implemented. Pseudocode of this system would help.\\n\\nWe submitted code (in PyTorch) along with the paper. 
\\n\\nHere, we write some pseudocode.\\n\\n---\\ngenerate_one_input (rel, colour1, colour2) \\n\\n```\\ncircle1 \\u2190 randomly generate circle 1 with a 2-D point c1 as the centre and a radius r1\\ncircle2 \\u2190 randomly generate circle 2, with the centre c2 and radius r2, satisfying the set-theoretic relation rel with the first circle\\nimage \\u2190 draw circle 1 (in colour 1) and circle 2 (in colour 2)\\nreturn circle1(c1,r1), circle2(c2,r2), image\\n```\\n\\n---\\ngenerate_one_new_training_data (EulerNet, rel1, rel2)\\n```\\ncircle1(c1,r1), circle2(c2,r2), image1 \\u2190 generate_one_input (rel1, red, green) \\ncircle3(c3,r3), circle4(c4,r4), image2 \\u2190 generate_one_input (rel2, green, blue)\\nexpected_output \\u2190 get_set_theoretic_relation_between(circle1, circle4)\\noutput_of_EN \\u2190 EulerNet(image1, image2)\\nif output_of_EN is not equal with expected_output:\\n\\treturn ((image1, image2), expected_output)\\nelse:\\n\\treturn empty\\n```\\n\\n---\\ncollect_new_training_data(EulerNet, M)\\n```\\ncount = 0\\ntraining_data = []\\nfor each rel1 in the four set-theoretic relations:\\n\\tfor each rel2 in the four set-theoretic relations:\\n\\t\\tnew_data \\u2190 generate_one_new_training_data (EulerNet, rel1, rel2)\\n\\t\\tif new_data is not empty:\\n\\t\\t\\tcount += 1\\n\\t\\t\\ttraining_data.append(new_data)\\n\\t\\tif count == M: \\n\\t\\t\\treturn training_data\\n```\\n\\n---\\nautomatic improvement of EulerNet for N times. \\n```\\nversion = 0\\nEN_0 \\u2190 EulerNet\\nwhile version < N:\\n\\ttraining_data \\u2190 collect_new_training_data(EN_version, M)\\n\\tEN_new \\u2190 train EN_version using training_data\\n\\tversion += 1\\n\\tEN_version = EN_new \\nreturn EN_version \\n```\"}
We have almost rewritten the whole text and uploaded a new version, hoping all your concerns are addressed.\"}", "{\"comment\": \">With the right data, any finite problem can be solved by training NNs by just learning a direct input-output map.\\n>Limited depth models can solve any finite problem; hence they can solve the\\u00a0finite\\u00a0problem of syllogistic reasoning.\\n\\nNo. Here, we clearly argued that there is no right data to reach the rigour of the symbolic level of syllogistic reasoning. We have explained this. It seems you neglected it. Again, taking the two simplest syllogistic reasoning types: \\n\\n\\u201call Greeks are humans. All humans are mortal. Therefore, ALL Greeks are mortal\\u201d\\n\\n \\u201call Greeks are humans. All humans are mortal. Therefore, SOME Greeks are mortal\\u201d\\n\\nWhat is the direct input-output map? \\n\\n\\n> That's not a formal definition.\\u00a0\\n\\nThe formal definition of oversmoothing is abstracted as all output embeddings coinciding, and is described in the supplementary material.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks again for giving us feedback. We have uploaded a new version of the paper. We would very much appreciate your continued feedback.\"}", "{\"summary\": \"The authors highlight the limitations of neural networks, including large language models (LLMs), in achieving rigorous syllogistic reasoning, which is essential for logic and human rationality. They argue that these networks should avoid combination tables and instead use non-vector embeddings to prevent oversmoothing. The paper reviews the Siamese Masked Autoencoder and presents experiments demonstrating that models relying on combination tables cannot attain 100% accuracy in syllogistic tasks. However, using non-vector embeddings as computational building blocks can help neural networks avoid oversmoothing. 
This work aims to bridge the gap between neural networks for approximate and rigorous logical reasoning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The authors substantiate their claims with experimental results, showcasing the shortcomings of existing models, such as the Siamese Masked Autoencoder, in achieving high accuracy in syllogistic reasoning.\", \"The paper opens avenues for further exploration, encouraging researchers to develop architectures that can effectively address rigorous reasoning tasks.\"], \"weaknesses\": [\"The authors claim three main contributions, and there are corresponding weaknesses for each:\", \"**Contribution 1:** The authors conduct an experiment in Section 4. However, the experiments in Sections 4.1 and 4.2 appear to primarily test neural models' performance on out-of-distribution inputs. The poor performance of neural models on out-of-distribution inputs is already well-documented, which limits the novelty of this contribution.\", \"**Contribution 2:** The use of combination tables is discussed in Section 4.3, but this section is confusing. For example, the authors state that a combination table that only generates the conclusion \\\"all V are U\\\" is not enough, since it misses the conclusion \\u201csome V are U.\\u201d However, the statement \\\"all V are U\\\" clearly describes a part-whole relationship, and \\\"some V are U\\\" can be derived from \\\"all V are U.\\\" The authors did not explain why this scenario is worse.\", \"**Contribution 3:** The authors discuss this in Section 5 (lines 502-519), but the proof is unclear. For example, it's unclear how the two theorems prove \\\"using non-vector feature embedding to avoid oversmoothing\\\". Additionally, there is a lack of empirical studies to support it.\"], \"questions\": \"1. Are the phenomena described in Section 4.1 distinct from typical out-of-distribution scenarios?\\n\\n2. 
In Section 5 (lines 502-519), what is the relationship between using (non-)vector feature embeddings and output embeddings being points?\\n\\n3. Given that symbolic approaches are effective for syllogistic reasoning, why is it necessary for neural models to also support rigorous reasoning? In Section 2.1 (line 181), the authors argue that \\\"symbolic approaches neither explain how symbols emerge from our neural minds nor capture the ways humans reason in daily life.\\\" Can neural models genuinely achieve these objectives?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe go through all your comments as follows (part 1). \\n\\n>There are a few results that seem simple, arbitrary, poorly explained, and relevant only to a single network architecture. \\n\\nFrom this single network architecture, we generalise the limitation of combination tables -- they can not cover all types of valid syllogistic reasoning. This method causes all network architectures, if they use composition tables, not to cover all types of valid syllogistic reasoning. \\n\\n> It is not clear to me what I should take home from these experiments.\\nHere, we show that existing neural network architectures cannot reach rigorous syllogistic reasoning. This may surprise many researchers.\\n\\n> The 'sketched proof' which is supposed to prove that transformers cannot do syllogistic reasoning also falls short: It assumes that they oversmooth, which only happens for transformers with many layers (the theoretical results are for the infinite-depth setting). 
If this happened consistently in practical transformer models, there is no chance LLMs could work as well as they do (as also Dovonon 2024 argues and shows, which is cited).\\n\\nLLMs indeed perform well with syllogistic reasoning but have not reached the rigour of syllogistic reasoning [1,2].\\n\\n[1] Tiwalayo Eisape, MH Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, and Tal Linzen. A systematic comparison of syllogistic reasoning in humans and language models. In NAACL, 2024\\n\\n[2] Andrew K Lampinen, Ishita Dasgupta, Stephanie C Y Chan, Hannah R Sheahan, Antonia Creswell, Dharshan Kumaran, James L McClelland, and Felix Hill. Language models, like humans, show content effects on reasoning tasks. PNAS Nexus, 3(7), 2024\\n\\n> Then the authors argue that different concept embeddings are needed, but do not compare (either theoretically or empirically) to the vector case, except for referring quickly to related work.\\n\\nWe do not compare, because this goes beyond the scope of this paper. Here, we mainly argue that traditional neural networks cannot reach the rigour of syllogistic reasoning (because most researchers may believe this task is easy and already solved by RNN or Transformers). Then, we theoretically prove a theorem that shows a necessary step to connect the recent work [1] that reaches the rigour of syllogistic reasoning. \\n\\n> What is the motivation for specifically studying this Siamese Masked Autoencoder model? \\n\\nWe show that recovering the whole by observing its parts is desirable for object recognition. This is approximately simulated by the Siamese Masked Autoencoder model. The same architecture is used in Euler Net for syllogistic reasoning. Euler Net demonstrated the desirable ability in object recognition (observing a single green circle and recognising a regular input with two circles), but this ability is not desirable for reasoning. 
\\n\\nWe try to convey the message that the object recognition component will cause problems for an end-to-end NN system for reasoning. \\n\\n> Line 357: \\\"We fed new randomly generated test data' How is this data different?\\n\\nThese new data were generated by randomly choosing the centres and the radii of two circles (as long as they are complete in the image). Two images in the original dataset of Euler Net were described in Section 5.1 in the supplementary material \\u2013 line 389 \\u2013 395 \\u2013 they were also randomly generated, but with different constraints. \\t\\n\\n> Line 359: What's the motivation for Euler Net version 2? The description of this method is extremely difficult to follow and incomplete. How does a model 'generate' input images?\\n\\nThe motivation of Euler Net version 2 is to explore the reasoning limit of Euler Net. The idea is to implement a wrapper system that can decide whether the output of Euler Net is correct, and if not, this wrapper system will create a new piece of training data for this error. The random input generator of the wrapper system chooses the centres and radii of two circles, and then it can compute the (expected) correct output of Euler Net. Using the set of automatically generated new training data, Euler Net is trained and evolves to Version 2. \\n\\n> 4.1, first paragraph. This lacks in details. Furthermore, it's well known that standard NNs are not adversarially robust. This connection is missing.\\n\\nWe show that Euler Net demonstrated good capability in object recognition and this caused problems for reasoning. For reasoning, a single green circle is OOD, but for object recognition, it is not \\u2013 it exists in the training data.\"}", "{\"comment\": \"We sincerely thank you for reading the revision and for raising the score. We made these changes by following your constructive comments and those of other reviewers (though it is an extensive revision). 
The fact that supervised deep learning cannot reach rigorous syllogistic reasoning is a bit harsh (also to us). However, knowing this fact in time, and the right way to pursue neural reasoning, will greatly help others and may save many social resources. If possible, please let us know what concerns have, and have not, been addressed in the revised version. Thanks again.\"}", "{\"comment\": \"> That's not a formal definition. What is the notion of convergence? What are the outputs? What is the process with which they converge?\\n\\nSyllogistic reasoning in the real world involves objects with such complex shapes and colours that we need multi-layered Slot Attention Transformers to recognise them (each object embedding is a Gaussian distribution N(&mu;, &delta;))[1]. Then, we feed these object embeddings into a neural component to reason about syllogistic relations, e.g. part-whole relations.\\n\\nTransformers suffer from oversmoothing when their depth increases. Oversmoothing means that outputs converge to the same feature embedding [2-5]. In our setting, oversmoothing means that all &mu; and all &delta; are the same. Consider the sphere O with centre &mu; and the radius &delta; as the representative of an object embedding. We formally define oversmoothing as all spheres being the same. In the setting of our syllogistic reasoning, we have ''for any O_i, if O_i is part of O_1, then O_i and O_1 coincide.'' We prove that &delta; = 0, which is equivalent to all spheres (also Gaussian distributions) degenerating into a point. \\n\\n\\n[1] Locatello, Francesco, et al. \\\"Object-centric learning with slot attention.\\\" Advances in neural information processing systems 33 (2020): 11525-11538. \\n\\n[2] Namuk Park and Songkuk Kim. How do Vision Transformers Work? In ICLR 2022.\\n\\n[3] Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. 
Anti-oversmoothing in deep vision transformers via the Fourier domain analysis: From theory to practice. In ICLR 2022.\\n\\n[4] Xiaojun Guo, Yifei Wang, Tianqi Du, and Yisen Wang. ContraNorm: A contrastive learning perspective on oversmoothing and beyond. In ICLR 2023\\n\\n[5] Gb\\u00e8tondji J-S Dovonon, Michael M. Bronstein, and Matt J. Kusner. Setting the record straight on transformer oversmoothing, 2024. https://arxiv.org/abs/2401.04301\"}", "{\"comment\": \"Dear Reviewer,\\n\\nYour continued feedback will be highly appreciated.\"}", "{\"comment\": \"Thanks for the suggestion.\\n\\nWe conducted an additional experiment to show that, using a combination table, Euler Net (EN) cannot cover all valid types of syllogistic reasoning, and will add it to the supplementary material and publish the new dataset. \\n\\nWe created a new dataset that covers all 24 valid types of syllogistic reasoning, to test the performance of a well-trained Euler Net (99.8% accuracy on the benchmark dataset).\\n\\nThis dataset is created as follows: We group the 24 _valid_ syllogism types into 14 groups, as _no x are y_ has the same meaning as _no y are x_, and _some x are y_ has the same meaning as _some y are x_. For each group, we created 500 test cases by extracting hypernym relations from WordNet-3.0, each test case consisting of one true conclusion and one false conclusion, totalling 14000 syllogism reasoning tasks.\\nIn the hypernym structure, _elementary\\\\_particle.n.01_ is a descendant of _natural\\\\_object.n.01_ and _artifact.n.01_ is not a descendant of _natural\\\\_object.n.01_. So, we create the true syllogistic reasoning as: If _all elementary\\\\_particle.n.01 are natural\\\\_object.n.01_, _no artifact.n.01 are natural\\\\_object.n.01_, then _all elementary\\\\_particle.n.01 are not artifact.n.01_. 
The false syllogistic reasoning will be : If _all elementary\\\\_particle.n.01 are natural\\\\_object.n.01_, _no artifact.n.01 are natural\\\\_object.n.01_, then _some elementary\\\\_particle.n.01 are artifact.n.01_. \\n\\nWe use the pre-processing tool of EN to transform premises into coloured circles, and conclusions into vectors, respectively, and fed to EN. For 8 syllogistic structures, EN reaches 100\\\\% accuracy, namely, BARBARA, CELARENT, CESARE, DARAPTI, CALEMES, CAMESTRES, FELAPTON, and FESAPO. Accuracies of the rest 16 types range from $50\\\\\\\\%$ to $83.3\\\\\\\\%$. The overall accuracy is 76\\\\\\\\%. \\n\\\\begin{array}{|l|c|l|c|l|c|}\\n \\\\\\\\hline Valid\\\\ Type & Accuracy & Valid\\\\ Type& Accuracy& Valid\\\\ Type & Accuracy \\\\\\\\\\\\\\\\\\\\\\\\hline\\n BARBARA & 100\\\\\\\\% & BARBARI& 50\\\\\\\\% &BAROCO&66.7\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n BAMALIP & 50\\\\\\\\% & BOCARDO& 75\\\\\\\\% &CALEMES &100\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n CAMESTROS & 50\\\\\\\\% & CELARENT& 100\\\\\\\\% &CESARO &50\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nCALEMO & 50\\\\\\\\% & CESARE& 100\\\\\\\\% & CELARONT&50\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\n DARAPTI & 100\\\\\\\\% & DARII& 75\\\\\\\\% &DISAMIS &75\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nFESAPO & 100\\\\\\\\% & DATISI& 75\\\\\\\\% &DIMATIS&75\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline\\nFELAPTON & 100\\\\\\\\% &FERIO& 83.3\\\\\\\\% &FERISON &83.3\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline\\nCAMESTRES & 100\\\\\\\\% &FRESISON& 83.3\\\\\\\\% &FESTINO&83.3\\\\\\\\% \\\\\\\\\\\\\\\\\\\\\\\\hline \\nOverall&76\\\\\\\\% &&& \\\\\\\\\\\\\\\\\\\\\\\\hline \\n\\\\end{array}\\n\\nAll these valid types are explained in the supplementary material.\"}", "{\"comment\": \"> One-hot representation reduces the amount of training data, compared with using image representation.\\n\\nThank you for your answer, but this has nothing to do with logic soundness or validity. 
I don't think neural networks or statistical learning in general could guarantee soundness in logical reasoning, they are just probabilistic approximately correct.\\n\\n> We conducted an additional experiment to show that using combination table Euler Net (EN) cannot cover all valid types of syllogistic reasoning, and will add it to the supplementary material and publish the new dataset.\\n\\nThank you for providing the detailed results, it improves the readability of your paper.\\n\\nBut I think that Euler Network is a very special case in machine learning models. Your claim that it can learn one-hop part-of relational reasoning soundly (and in image representation) still lacks a theoretical guarantee.\"}", "{\"metareview\": \"The paper studies whether current neural networks can perform syllogistic reasoning via Euler diagrams. The results indicate they fail, and the authors argue that neural networks need to go beyond vector embeddings to solve rigorous reasoning. In my opinion, the reviewers have provided a detailed and thoughtful assessment, presenting strong arguments regarding the paper's suitability for ICLR in its current state: weak presentation, unclear proof that transformers cannot do syllogistic reasoning, and unclear design choice for the neural architecture used, among others. These downsides should be addressed before publication. Please note that the overall judgment should not be taken as a statement regarding the usefulness of your research.\", \"additional_comments_on_reviewer_discussion\": \"The discussion arose from problems and questions that had been raised in the reviews and also touched upon cognitive science aspects. One of the main topics was OOD and logical reasoning with neural networks. Some discussions were quite long but did not change the overall impression on the paper.\"}" ] }
4nU3BLG1ni
Multi-player Multi-armed Bandits with Delayed Feedback
[ "Jingqi Fan", "Zilong Wang", "Shuai Li", "Linghe Kong" ]
Multi-player multi-armed bandits have been researched for a long time due to their application in cognitive radio networks. In this setting, multiple players select arms at each time step and instantly receive feedback. Most research on this problem focuses on the content of the immediate feedback, whether it includes both the reward and collision information or the reward alone. However, delay is common in cognitive networks when users perform spectrum sensing. In this paper, we design an algorithm, DDSE (Decentralized Delayed Successive Elimination), for multi-player multi-armed bandits with stochastic delayed feedback and establish a regret bound. Compared with existing algorithms that fail to address this problem, our algorithm enables players to adapt to delayed feedback and avoid collisions. We also derive a lower bound in the centralized setting to prove that the algorithm is near-optimal. Numerical experiments on both synthetic and real-world datasets validate the effectiveness of our algorithm.
[ "multi-player multi-armed bandits", "delayed feedback" ]
Reject
https://openreview.net/pdf?id=4nU3BLG1ni
https://openreview.net/forum?id=4nU3BLG1ni
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmKec4V9T5", "wzGCAsT4Ld", "us1C1u4X6v", "rpzEAb30pU", "qcQoG2iWrM", "pD11JWIa85", "neMjpim4JT", "n9CH6TB7gE", "lvSpDTtN26", "lq9Ln648o4", "gsoDbZmQuA", "gQtbVhafOs", "fP1Vkh92TO", "f75wgpMhB8", "csEyUu07Qc", "cQfRHPzxsh", "bZxjiryErq", "bZ9in49EYS", "YG4RpsCtOt", "XFJCNLyN5y", "WWEXspekGJ", "Vd5kmCUKuq", "VOOhE1P1GF", "VHvx9ucVDT", "UFObYDd1Rn", "RkCg5MsHne", "PJFs4XBTR6", "NSpApcPgi6", "Mgb0fceYCx", "IbSEM8yoo6", "Fa886P2fMd", "FGbsJRupLT", "ETiz8PIDVX", "Cp9UVWZrQu", "7y3jWKlvLj", "7VEBRXuHqd", "7SeJKofo6Q", "5BdSlm6SvE", "2p22d361YE" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730491249918, 1729236082359, 1732281314614, 1732595377238, 1732280132537, 1730714876695, 1732279259925, 1732625226736, 1732711357100, 1732500108502, 1732268718985, 1732578028939, 1730628433407, 1732468791520, 1732626252462, 1732541103898, 1732279411425, 1732280309414, 1732268615338, 1733231761583, 1732271057584, 1732281392218, 1732550007756, 1732594779274, 1732281113837, 1734700357210, 1730689112601, 1732492891517, 1732280870068, 1732278608906, 1733231498393, 1732270925295, 1737523484786, 1732548886190, 1732278178577, 1732278374131, 1732278873231, 1733231797743, 1732279659636 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission2083/Reviewer_ix9S" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_j5FD" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_fYEh" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_j5FD" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_ix9S" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_N47W" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_N47W" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_N47W" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_fYEh" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_ix9S" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_ix9S" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Area_Chair_Xo4z" ], [ "ICLR.cc/2025/Conference/Submission2083/Reviewer_WghT" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ], [ "ICLR.cc/2025/Conference/Submission2083/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper considers a multi-arm multi-player bandit setup with delayed reward. The paper proposes novel algorithms to counter the delay in receiving the reward. The paper bounds the regret in the decentralized setting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Bounding the regret in the multi-armed multi-agent bandit setup is challenging. The paper additionally considers the delay; hence, the contribution seems to be significant.\\n\\n2. The paper achieves the regret bound. \\n\\n3. Empirical results show the efficacy of the proposed approach.\", \"post_rebuttal_edit\": \"Addressing the delay parameter in the multi-armed collision model is important. Thus, I am happy to accept this paper.\", \"weaknesses\": \"1. The paper considers the cognitive radio setup. However, cognitive radio is hardly used in practice; it is only of academic interest. Can the paper provide any other relevant examples?\\n\\n2. The paper is very hard to read; hence, the contributions are obscure.\", \"questions\": \"1. Can the authors highlight the main technical challenges? Delay in the multi-armed setting is considered; while the reviewer agrees that the collision model does complicate things, at the technical level it is not clear how the analysis will be different.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the multi-player multi-armed bandit (MMAB) problem with delayed feedback, motivated by the application of cognitive radio networks. 
Unlike previous MMAB problems that assume instantaneous feedback from arms, this work tackles the challenge posed by delayed feedback. To overcome this challenge, this work proposes a decentralized delayed successive elimination (DDSE) algorithm, which operates in three stages: exploration, communication, and exploitation. The proposed DDSE algorithm enables players to adapt to delayed feedback and avoid collisions. This work theoretically analyzes the upper bound of regret for the DDSE algorithm and further compares the regret with two benchmark cases: DDSE without delay estimation and the centralized lower bound. By comparison, it shows that DDSE achieves a near-optimal regret bound.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe problem of MMAB with delayed feedback is well-motivated and highly relevant to real-world applications.\\n2.\\tIntroducing delayed feedback significantly increases the complexity of the already challenging MAB problem. The authors effectively decompose the regret to handle this complexity and present solid theoretical results.\\n3.\\tThe paper is well-written and easy to follow.\", \"weaknesses\": \"1.\\tMy main concern lies in the ID allocation for the leader-follower structure in the DDSE algorithm. If the central planner can assign an ID to each player, this DDSE algorithm is no longer fully decentralized. In many cognitive radio networks, sensing nodes are dynamic, and some nodes are even hidden or unknown to the network operator.\\n2.\\tThe communication assumption weakens the solution in this work. 
\\n3.\\tI suggest the authors move Subsection 5.3 ahead of Subsection 5.1 for better logic, as the centralized lower bound serves as the benchmark.\\n4.\\tIn the experiments, the number of players $M$ is relatively small compared to typical applications of cognitive radio networks.\\n5.\\tIn the experiments, the authors simply compare DDSE with two methods that do not account for delay. This comparison may be somewhat unfair. If there is no other available algorithm, it would be better to compare DDSE with the benchmark centralized algorithm.\", \"questions\": \"1.\\tIn cognitive radio networks, players are usually dynamic. Additionally, there are some hidden nodes (players) that are unknown to each other. In this case, will the DDSE algorithm still work?\\n2.\\tIf a player $j$ is waiting for the feedback from arm $k$ (i.e., $t<s+d_s^j$) and another player $l$ pulls this arm $k$, will there be a collision? If a collision occurs, will player $j$ fail to obtain a reward from arm $k$ after waiting $t-s$ time slots?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer j5FD (Part 3)\", \"comment\": \"# Response to Weakness 5:\\n*Q: In the experiments, the authors simply compare DDSE with two methods that do not account for delay. This comparison may be somewhat unfair. If there is no other available algorithm, it would be better to compare DDSE with the benchmark centralized algorithm.*\\n\\nAs Reviewer N47W suggests, we have compared our algorithms with [5, 6, 7]. All the experimental results show that our algorithms perform best. We also compare DDSE in both decentralized and centralized settings. Experimental results in Figure 10 show that the performance of DDSE closely matches that in the centralized setting.\\n\\n# Response to Question 1:\\n*Q: In cognitive radio networks, players are usually dynamic. 
Additionally, there are some hidden nodes (players) that are unknown to each other. In this case, will the DDSE algorithm still work?*\\n\\n## Dynamic Players:\\nIf players are dynamic, the setting is called asynchronous multi-player bandits. In this setting, players can enter the game and leave at any time. The latest study on asynchronous MMAB is [8]. However, [8] relies on several assumptions:\\n\\n1. There exists a centralized environment, where players can freely communicate with others.\\n\\n2. Although players enter the game at different times, they leave at the same time $T$.\\n\\n3. At each time, the probability that every player enters the game is the same and known.\\n\\nThe area of centralized asynchronous MMAB without assumptions is still blank, let alone decentralized asynchronous MMAB. In decentralized bandits, delay has already significantly complicated the problem because players rely on the feedback of collisions to get some information from other players. Incomplete feedback causes inconsistency between players, leading to staggered exploration, frequent collisions, or premature exploitation.\\n\\n## Hidden Players:\\nAs stated in \\\"Response to Weakness 1\\\", if players have no prior knowledge about others, an initialization phase can be introduced. Players start with an initialization phase where, despite incomplete feedback, they begin to explore arms. If a collision occurs during exploration that should not have happened, the affected player notifies others by sending a collision signal at a specific time. After a period of delay, players receive the signal and re-enter the initialization phase, followed by the next round of exploration. By adopting this approach, it is possible to provide probabilistic guarantees under our sub-Gaussian delay assumption. However, this would complicate the notation significantly. 
The problem addressed in this work is already sufficiently challenging, as our study is the first to tackle delays in multi-player bandits where collisions occur when players select the same arm. Our goal is to pave the way for further study in this area. A fully decentralized MMAB can be discussed in future work.\\n\\n# Response to Question 2:\\n*Q: If a player\\u00a0$j$\\u00a0is waiting for the feedback from arm\\u00a0$k$\\u00a0(i.e.,\\u00a0$t<s+d _ s^j$) and another player\\u00a0$l$\\u00a0pulls this arm\\u00a0$k$, will there be a collision? If a collision occurs, will player\\u00a0$j$\\u00a0fail to obtain a reward from arm\\u00a0$k$\\u00a0after waiting\\u00a0$t\\u2212s$\\u00a0time slots?*\\n\\nIn our paper, players selecting the same arm at the same time is called a \\\"collision\\\". A player $j$ selects arm $k$ at time $s$. Another player $\\\\ell$ also selects arm $k$ at time $s$. Then we say that the two players collide with each other, but they do not receive the feedback immediately. The delays $d _ s^j$ and $d _ s^{\\\\ell}$ do not have to be the same. Player $j$ and player $\\\\ell$ will receive the feedback that they collided at time $s$ in different time slots. When the feedback from time $s$ has not been received, player\\u00a0$j$\\u00a0can wait for the feedback from arm\\u00a0$k$\\u00a0(i.e.,\\u00a0$t<s+d _ s^j$) while selecting other arms. \\\"Waiting\\\" does not mean that the player is idle. If another player $\\\\ell$ selects arm $k$ also at time $s$, then both players $j$ and $\\\\ell$ will receive collisions. If the player $\\\\ell$ selects arm $k$ at $t^{\\\\prime}$ (we define $s < t^{\\\\prime} \\\\leq s+d _ s^j$), player $j$ will not receive a collision at time $s+d _ s^j$, because at time $s$, only $j$ selects the arm $k$.\"}", "{\"comment\": \"We truly appreciate your thoughtful questions. It has allowed us to clarify and elaborate on key aspects of our results.\\n# Response to Question 1\\nYes, it is expected regret. 
The regret is defined as:\\n\\n$$\\nR _ T := T\\\\sum _ {j\\\\in[M]}\\\\mu _ {(j)} - \\\\mathbb{E}\\\\left[ \\\\sum _ {t=1}^{T} \\\\sum _ {j\\\\in[M]} r^j(t) \\\\right],\\n$$\\n\\nwhere the expectation is taken over the randomness of the rewards.\\n\\n# Response to Question 2\\nThe centralized lower bound is provided in Theorem 1. We recall that the lower bound is:\\n\\n$$\\nR _ T \\\\geq \\\\underbrace{\\\\sum _ {k>M}\\\\frac{(1-o(1))\\\\log(T)}{2\\\\theta\\\\Delta _ k}} _ {\\\\mathrm{term\\\\ I}} + \\\\underbrace{\\\\left(\\\\mathbb{E}[d] - \\\\sigma _ d\\\\sqrt{\\\\frac{\\\\theta}{1-\\\\theta}}\\\\right) \\\\frac{M}{2K}\\\\sum _ {k>M}\\\\Delta _ k - \\\\frac{2}{\\\\theta}} _ {\\\\mathrm{term\\\\ II}} , \\\\tag*{(1)}\\n$$\\n\\nwhere $\\\\mathbb{E}[d]$ is the expectation of a delay distribution and $\\\\sigma _ d^2$ is the sub-Gaussian parameter. Define $d(\\\\theta):=\\\\min\\\\{ \\\\gamma\\\\in\\\\mathbb{N}|P(d \\\\leq \\\\gamma) \\\\geq \\\\theta \\\\}$ as the quantile function of the delay distribution, so $\\\\theta$ in the lower bound is a quantile. \\n\\nThe regret bound of DDSE is in Theorem 2. 
We also rewrite here:\\n$$\\n\\\\begin{aligned}\\n\\tR _ {T} \\\\leq &\\\\sum _ {k>M}\\\\frac{323\\\\log(T)}{\\\\theta \\\\Delta _ k} + \\\\left(9 +\\\\frac{2M\\\\sum _ {k>M}\\\\Delta _ k}{K-M}\\\\right)\\\\mathbb{E}[d] + \\\\sigma _ d \\\\left(3\\\\sqrt{6} + 6\\\\sqrt{2\\\\log(\\\\frac{1}{1-\\\\theta})}\\\\right) \\\\\\\\\\\\\\\\\\n\\t&+ \\\\frac{\\\\sigma _ d M}{K-M}\\\\sum _ {k>M}\\\\Delta _ k\\\\sqrt{\\\\log\\\\left((M-1)(K+2M)\\\\right)} + C_1, \\\\\\\\\\\\\\\\\\n\\t\\\\leq & \\\\underbrace{\\\\sum _ {k>M}\\\\frac{323\\\\log(T)}{\\\\theta \\\\Delta_k}} _ {\\\\mathrm{term\\\\ A}} + \\\\underbrace{\\\\left( 2\\\\mathbb{E}[d]+\\\\sigma _ d\\\\sqrt{3\\\\log(K)} \\\\right)\\\\frac{M}{K-M}\\\\sum _ {k>M}\\\\Delta _ k} _ {\\\\mathrm{term\\\\ B}} + \\\\underbrace{\\\\left( 9\\\\mathbb{E}[d]+6\\\\sqrt{2\\\\log(\\\\frac{1}{1-\\\\theta})} \\\\right)} _ {\\\\mathrm{term\\\\ C}} \\\\\\\\\\\\\\\\ \\n\\t& + \\\\underbrace{3\\\\sqrt{6}\\\\sigma_d} _ {\\\\mathrm{term\\\\ D}} + \\\\underbrace{C _ 1} _ {\\\\mathrm{term\\\\ E}} \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad \\\\quad (2)\\n\\\\end{aligned}\\n$$\\nwhere $C _ 1= \\\\sum _ {k>M}\\\\frac{195}{\\\\theta\\\\Delta _ k^2} + \\\\frac{4Me^{-\\\\delta^2/2}}{\\\\delta^2}$.\\n\\nOnly the first terms in (1) and (2) are related to $T$. Term A is aligned with term I up to constant factors. Term E arises due to the decentralized environment and is not related to delay. Regarding delay, a comparison of term II with terms B, C, and D reveals that the difference on $K$ and $M$ is only $O(\\\\frac{1}{1-M/K})\\\\sqrt{\\\\log(K)}$. This indicates that the regret caused by delay does not increase rapidly as $K$ and $M$ increase. \\n\\nWe hope these clarifications address your questions and provide further insight into the results. 
Please feel free to reach out if you have any additional questions.\"}", "{\"title\": \"Response to Reviewer ix9S (Part 2)\", \"comment\": \"**1. The first block is used for removing an arm from $\\mathcal{M}^j _ p$.**\n\nThe leader first finds the bad arm $a^- _ p$ in $\\mathcal{M}^j _ p$ and identifies the position of $a^- _ p$. Then the leader selects the $i _ {a^- _ p}$-th arm in $\\mathcal{M}^j _ {p-q _ j}$ for $M$ times. Followers still select arms in $\\mathcal{M}^j _ {p-q _ j}$ in a round-robin way. After a collision occurs, followers save the position at which the collision happened. Then, when she gets both the position of the arm to be removed and the arm to be added, she will update the best arm set. \nIn the example, players are in phase $2$ (i.e., $p=2$) but they still use $\\mathcal{M} _ 1^j$. $t _ 1$ to $t _ 4$ denote the rounds in the first block. We suppose that the leader wants to remove arm $1$ from $\\mathcal{M} _ 2^4$. The arm selection is in the table.

| | $t _ 1$ | $t _ 2$ | $t _ 3$ | $t _ 4$ |
| ---------- | ----- | ----- | ----- | ----- |
| Leader | 3 | 3 | 3 | 3 |
| Follower 1 | 4 | 2 | 6 | 3 |
| Follower 2 | 2 | 6 | 3 | 4 |
| Follower 3 | 6 | 3 | 4 | 2 |

Each follower collides on arm $3$ once. They remember that the position of the collision in $p=2$ is $4$. 

**2. The second block is used for adding an arm to $\mathcal{M}^j _ p$.**

The leader finds a better arm that has higher empirical rewards but is not in $\mathcal{M}^j _ p$. Then she wants to pass the new arm to followers. Because the new arm $a^+ _ p$ might not be in $\mathcal{M}^j _ p$ or $\mathcal{M}^j _ {p-q _ j}$, we cannot let followers receive the collision information via the best arm set. Thus, we utilize the whole arm set. The leader continuously selects $a^+ _ p$ for $K$ times. Followers select all arms in a round-robin way. 
This block continues $K$ times because the length of the original arm set is $K$ and we hope to use the original whole arm set to pass the information.\nIn the example, $t _ 5$ to $t _ {10}$ denote the rounds in the second block. We suppose that the leader wants to add arm $5$.\n\n| | $t _ 5$ | $t _ 6$ | $t _ 7$ | $t _ 8$ | $t _ 9$ | $t _ {10}$ |
| ---------- | ----- | ----- | ----- | ----- | ----- | -------- |
| Leader | 5 | 5 | 5 | 5 | 5 | 5 |
| Follower 1 | 1 | 2 | 3 | 4 | 5 | 6 |
| Follower 2 | 2 | 3 | 4 | 5 | 6 | 1 |
| Follower 3 | 3 | 4 | 5 | 6 | 1 | 2 |

Each follower collides on arm $5$ once. They remember that the collision happens on arm $5$. 

**3. The third block is used for passing the ending of exploration.**

The leader explores and gradually eliminates all sub-optimal arms from $[K]$. However, followers do not know when the leader's exploration ends. If the leader's exploration has ended and she has passed all optimal arms to followers, continuing to enter the communication phase is a waste of time for followers. Therefore, a block used for passing the ending of exploration is necessary. Players' actions are similar to the first block.\n\n**4. After a period of delay:**\n\nNote that the collisions in the first and second blocks cannot be received immediately. After a period of delay, follower $j$ receives the position $4$ and the arm to be added is $5$. She also knows the feedback is from phase $2$. 
Therefore, she updates:\\n$$\\n\\\\mathcal{M}_1^j = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\rightarrow \\\\mathrm{replace\\\\ the\\\\ arm\\\\ in\\\\ position\\\\ 4\\\\ to\\\\ arm\\\\ 5} \\\\rightarrow\\\\mathcal{M}^j_2 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\n$$\\nTherefore, although players use the previous arm set $\\\\mathcal{M} _ 1^j$, followers can still receive the correct update information of $\\\\mathcal{M} _ 2^j$ even though they do not know $\\\\mathcal{M} _ 2^j$.\"}", "{\"summary\": \"In this paper, the authors have considered the delayed feedback setting in multi-player multi-armed bandit problem, motivated by cognitive radio applications. A decentralized delayed successive elimination (DDSE) algorithm which takes into account stochastic delay, is proposed in the paper, and a regret bound is established. Contrary to existing algorithms, the proposed algorithm can avoid collision by adapting to delayed feedback. A corresponding lower bound on the regret is also derived. Experiment results are presented to demonstrate the efficacy of the proposed algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The considered problem is well-motivated and the analysis appears to be sound.\", \"weaknesses\": \"The proposed algorithm takes a leader-follower approach which makes it semi-distributed in nature as there is a necessity of communication between the leader and the followers. The authors have considered the fixed user setting where no users are allowed to enter or leave the systems. The modeling of delay could have been better.\", \"questions\": \"My concerns are as follows:\\n1.\\tThe proposed algorithm takes a leader-follower approach which makes it semi-distributed in nature as there is a necessity of communication between the leader and the followers. There are works in the literature which can work without the requirement of a leader, e.g., \\nTrinh, Cindy, and Richard Combes. 
"A High Performance, Low Complexity Algorithm for Multi-Player Bandits Without Collision Sensing Information.\" arXiv preprint arXiv:2102.10200 (2021).\\n2.\\tWhat is the rationale behind Assumption 1? What are the components of delay? For example, does it contain queueing delay? How practical is the consideration of sub-Gaussian delay?\\n3.\\tThe authors have considered the fixed user setting where no users are allowed to enter or leave the system. However, in a practical cognitive radio application, users may enter or leave the system. How does the proposed algorithm behave when user entering and leaving are allowed in the system?\\n4.\\tIt is not clear why there is a provision of eliminating arms for which the LCB is bigger than the UCB. Please specify the motivation behind the virtual communication phase in detail.\\n5.\\tPlease provide a pointer to the result where an upper bound on the feedback delay is derived. This result has been used in Lemma 1. \\n6.\\tCan the authors quantify the gap between the lower bound and upper bound on the regret of the proposed algorithm? It will then be more justified to call the proposed algorithm near-optimal. \\n7.\\tSince the paper is highly motivated by cognitive radio applications, I expected some real wireless network simulations (such as ns-3 simulations) where the delays are real delays in a wireless network.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer N47W (Part 2)\", \"comment\": \"# Response to Question 4:\\n*Q: How are collisions interpreted in the communication phase? Is it binary signaling?*\\n\\nThe collisions are just ordinary collisions, representing that at least two players selected the same arm at the same time. We design the communication phase to handle the difficulty that players cannot directly communicate with others. 
Although there is no way to pass the information directly under the environment, our algorithms build a specific framework for players to pass information in such an environment. Via the algorithm, collisions can be interpreted as some information by players. \n\nNext, we explain our communication phase here. The primary goal of the communication phase is to pass the update of $\mathcal{M} _ p^j$. Each communication phase is composed of three blocks. We consider $M=4$ and $K=6$. The example is:\n\n\\begin{aligned} 
\\mathrm{Leader:} \\ &\\mathcal{M}^4_1 = \\\\{4, 2, 6, 3\\\\}, \\mathcal{M}^4_2 = \\\\{4, 2, 6, 1\\\\}, \\mathcal{M}^4_3 = \\\\{4, 2, 6, 5\\\\} \\\\\\\\ 
\\mathrm{Follower\\ 1:} \\ &\\mathcal{M}^1_1 = \\\\{4, 2, 6, 3\\\\} \\\\\\\\
\\mathrm{Follower\\ 2:} \\ &\\mathcal{M}^2_1 = \\\\{4, 2, 6, 3\\\\} \\\\\\\\
\\mathrm{Follower\\ 3:} \\ &\\mathcal{M}^3_1 = \\\\{4, 2, 6, 3\\\\}
\\end{aligned}

## The first block is used for removing an arm from $\\mathcal{M}^j _ p$. 
The leader first finds the bad arm $a^- _ p$ in $\\mathcal{M}^j _ p$ and identifies the position of $a^- _ p$. Then the leader selects the $i_{a^- _ p}$-th arm in $\\mathcal{M}^j _ {p-q _ j}$ for $M$ times. Followers still select arms in $\\mathcal{M}^j _ {p-q _ j}$ in a round-robin way. After a collision occurs, followers save the position at which the collision happened. Then, when she gets both the position of the arm to be removed and the arm to be added, she will update the best arm set. \nIn the example, players are in phase $2$ (i.e., $p=2$) but they still use $\\mathcal{M} _ 1^j$. $t _ 1$ to $t _ 4$ denote the rounds in the first block. We suppose that the leader wants to remove arm $1$ from $\\mathcal{M} _ 2^4$. 
The arm selection is in the table.\n\n| | $t _ 1$ | $t _ 2$ | $t _ 3$ | $t _ 4$ |
| ---------- | ----- | ----- | ----- | ----- |
| Leader | 3 | 3 | 3 | 3 |
| Follower 1 | 4 | 2 | 6 | 3 |
| Follower 2 | 2 | 6 | 3 | 4 |
| Follower 3 | 6 | 3 | 4 | 2 |

Each follower collides on arm $3$ once. They remember that the position of the collision in $p=2$ is $4$. 
## The second block is used for adding an arm to $\mathcal{M}^j _ p$.
The leader finds a better arm that has higher empirical rewards but is not in $\mathcal{M}^j _ p$. Then she wants to pass the new arm to followers. Because the new arm $a^+ _ p$ might not be in $\mathcal{M}^j _ p$ or $\mathcal{M}^j _ {p-q _ j}$, we cannot let followers receive the collision information via the best arm set. Thus, we utilize the whole arm set. The leader continuously selects $a^+ _ p$ for $K$ times. Followers select all arms in a round-robin way. This block continues $K$ times because the length of the original arm set is $K$ and we hope to use the original whole arm set to pass the information.\n\nIn the example, $t_5$ to $t_{10}$ denote the rounds in the second block. We suppose that the leader wants to add arm $5$.\n\n| | $t_5$ | $t_6$ | $t_7$ | $t_8$ | $t_9$ | $t_{10}$ |
| ---------- | ----- | ----- | ----- | ----- | ----- | -------- |
| Leader | 5 | 5 | 5 | 5 | 5 | 5 |
| Follower 1 | 1 | 2 | 3 | 4 | 5 | 6 |
| Follower 2 | 2 | 3 | 4 | 5 | 6 | 1 |
| Follower 3 | 3 | 4 | 5 | 6 | 1 | 2 |

Each follower collides on arm $5$ once. They remember that the collision occurred on arm 5.
## The third block is used for passing the ending of exploration.
The leader explores and gradually eliminates all sub-optimal arms from $[K]$. However, followers do not know when the leader's exploration ends. If the leader's exploration has ended and she has passed all optimal arms to followers, continuing to enter the communication phase is a waste of time for followers. 
Therefore, a block used for passing the ending of exploration is necessary. Players' actions are similar to the first block.\\n## After a period of delay:\\nNote that the collisions in the first and second blocks can not be received immediately. After a period of delay, follower $j$ receives the position $4$ and the arm to be added is $5$. She also knows the feedback is from phase $2$. Therefore, she updates:\\n$$\\n\\\\mathcal{M}_1^j = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\rightarrow \\\\mathrm{replace\\\\ the\\\\ arm\\\\ in\\\\ position\\\\ 4\\\\ to\\\\ arm\\\\ 5} \\\\rightarrow\\\\mathcal{M}^j_2 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\n$$\\nNo matter how late the feedback is received by the follower, she can update the correct $\\\\mathcal{M} _ p^j$.\"}", "{\"comment\": \"I thank the authors' response. However, my primary concern remains unresolved. Specifically, in practice, ID allocation by the leader is unrealistic since not all nodes are known in advance. Furthermore, this ID allocation significantly undermines the contribution of the distributed algorithm design.\"}", "{\"comment\": \"Thanks for answering my questions. However, it does not answer my question 1. Perhaps, I should have clarified before. I get the definition of regret which is external regret. My question was whether the bound achieved in Theorem 2 was achieved with high probability. I mean, can we say that with probability $1-\\\\delta$, $R_T$ is upper bounded by...?\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We thank you once again for your careful reading of our paper and your constructive comments and suggestions. We will appreciate it very much if you could let us know whether all your concerns are addressed. We are also more than happy to answer any further questions in the remaining discussion period.\"}", "{\"title\": \"Response to Reviewer fYEh (Part 2)\", \"comment\": \"# Response to Question 2:\\n*Q: What is the rationale behind Assumption 1? 
What are the components of delay? For example, does it contain queueing delay? How practical is the consideration of sub-Gaussian delay?*\\n## Rationale behind Assumption 1\\nThe rationale behind Assumption 1 lies in providing a realistic yet mathematically tractable model for network delays in multi-player bandit problems. In real-world networks, transmission delays are inherently bounded by physical and protocol limits, which prevent excessively large values. A common assumption on delay is that all delay is bounded by a fixed value $d_{\\\\max}$. However, we choose not to rely on this assumption. Instead, we propose a more flexible sub-Gaussian assumption that permits larger delays but with a low probability of occurrence. It is worth noticing that our assumption is more general than bounded delay, as all bounded random variables are inherently sub-Gaussian. In our analysis, we leverage the property of exponential decay in the tail probabilities characteristic of sub-Gaussian distributions.\\n## Components of delay\\nIn cognitive radio networks, delay typically consists of several components, depending on the communication process and network architecture. These components can include:\\n1. Propagation Delay: The time required for a signal to travel from the transmitter to the receiver across the communication medium.\\n2. Transmission Delay: The time taken to push all the bits of a packet onto the transmission medium. It depends on the packet size and the bandwidth of the link.\\n3. Processing Delay [5]: The time spent on processing tasks, such as spectrum sensing, packet routing, and other computational operations required before transmission.\\n4. Queueing Delay [6, 7]: The time an SU's data packet spends in a queue, either at the SU\\u2019s transmitter or at network devices (e.g., routers or base stations), waiting to be transmitted. 
Queueing delay occurs due to competition among SUs for limited resources, such as unoccupied PU channels or transmission opportunities.\\n\\nThus, queueing delay is an important and common component of the overall delay in cognitive radio networks. \\n## How practical is our assumption\\nAs stated before, sub-Gaussian delay is practical and well-suited to the dynamic nature of cognitive radio networks. Sub-Gaussian delay models are characterized by exponential tail decay, which reflects the typical behavior of delays in real-world systems: most delays are relatively small, but larger delays occasionally occur with diminishing probability. This is particularly relevant for cognitive radio networks, where delays may arise from dynamic spectrum access or contention among secondary users for idle channels. Moreover, our assumption is a generalization of bounded delay. Compared with [8, 9, 10], sub-Gaussian delay allows for rare occurrences of larger delays while maintaining tractability. This makes it more flexible and realistic for cognitive radio networks.\\n\\n# Response to Question 3:\\n*Q: Concerns about fixed users in the system.*\\n\\nIf players are not fixed, the setting is called asynchronous multi-player bandits. In this setting, players can enter and leave the game at any time. The latest study on asynchronous MMAB is [11]. However, [11] relies on several assumptions:\\n1. There exists a centralized environment, where players can freely communicate with others.\\n2. Although players enter the game at different times, they leave at the same time $T$.\\n3. At each time step, each player enters the game with the same known probability.\\n\\nCentralized asynchronous MMAB without these assumptions remains unexplored, let alone decentralized asynchronous MMAB. \\nIn decentralized bandits, the delay has already significantly complicated the problem because players rely on the feedback of collisions to get some information from other players.
Incomplete feedback causes inconsistency between players, leading to staggered exploration, frequent collisions, or premature exploitation. We plan to discuss this in future work.\"}", "{\"title\": \"communication by collisions\", \"comment\": \"The algorithm in [2] does have a communication phase, which is referred to as the signaling phase. In this phase, the players share their reward information with others through collisions so that everyone has the same reward information at the end of this phase. Hence the statement \\\"[1, 2, 3] because they do not have the communication phase\\\" is incorrect.\"}", "{\"summary\": \"The paper studies the multi-player, multi-armed bandit problem. The difference from prior studies is that the authors allow the feedback to be received with a random delay.\\nThe authors develop an algorithm named DDSE and upper bound its performance. They establish that the algorithm is near optimal by deriving a lower bound.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper provides a detailed analysis of the algorithms and establishes a lower bound. However, I could not verify all the claims due to presentation issues.\", \"weaknesses\": \"The authors consider the multi-player multi-armed bandit problem with a leader-follower structure. Several authors explore this problem. The new dimension of delayed feedback is a minor extension. In addition, I have concerns about the following aspects:\\n\\n1. The literature review is not detailed: Several papers consider multi-player bandits with a more general heterogeneous reward structure, which is well suited for cognitive radio networks. \\n2. The algorithm is hard to understand: (see details below)\\n3. The experiments section is weak: Why only compare with SIC-MMAB and not with other algorithms like Game-of-Thrones and Explore-Signal-Exploit Repeat\", \"questions\": \"It is hard to understand the DDSE algorithm.\\n\\n1.
What is the duration of exploration, communication, and exploitation?\\n2. Line 190 says, \\\"the best empirical set arm set of player j.\\\" How is this set defined?\\n3. Line 204: \\\"To avoid collision with followers and ensure sufficient exploration, the leader first sequentially hops in the set of best empirical arms with followers.\\\" How is it ensured that the best empirical arm of leader and follower do not overlap? How is the collision avoided?\\n4. How are collisions interpreted in the communication phase? Is it binary signaling?\\n\\n\\nIn the experiment section, why are the algorithms in any of the following papers not considered?\\n1. http://proceedings.mlr.press/v83/besson18a/besson18a.pdf\\n2. http://papers.neurips.cc/paper/7952-distributed-multi-player-bandits-a-game-of-thrones-approach.pdf\\n3. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8737653\", \"minor_issues\": \"1. Line 209: s<T or s<t?\\n2. Is there any difference between sequential hopping and round-robin?\\n3. Notation say [n]={1,2,..,n}. Then why |[K]| is M, not K?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concerns with experiments\", \"comment\": \"The updates in the experimental section do not convince me. The statement \\\"More importantly, none of these algorithms are involved in simulating communication by collisions\\\" is not true. I will keep my current score.\"}", "{\"comment\": \"I thank the authors for their detailed response. Since queuing delay could be arbitrarily large, does the assumption on delays (SubGaussian and generation of bounded delay) still hold?\"}", "{\"title\": \"Thank you\", \"comment\": \"I thank the authors for answering my questions. I have some follow-up questions--\\n\\n1. How do the agents select the leader? Is there any communication signal sent to agree on the leader?\\n\\n2. 
In Algorithm 1, line 3, it says `explore', how does the leader explore here? Is it like uniformly picking any of the arms? \\n\\n3. Since it is a collision model and the rewards may be different for different players, how do they select the optimal set of arms? I mean consider the following scenario where players 1 and 2 have the best reward for arm 1, but poor rewards for arm 2, then what should be the solution to it? Now if the number of players and arms increases, it seems that the original problem becomes a combinatorial problem in choosing the optimal arms. That begs the question about the practicality of this setup apart from the academic interest. In wireless communication, we do time-division multiplexing to avoid collision (same for the traffic intersection problem that you describe).\\n\\n4. I am a little bit confused about $\\\\Delta_k$, here, should not the optimal arm and thus the optimality gap depend on the individual player? Is there an inherent assumption that the rewards are the same for the players?\"}", "{\"title\": \"Response to Reviewer N47W (Part 3)\", \"comment\": \"# Response to Question on Experiments:\\n*Q: In the experiment section, why are the algorithms in papers [1, 2, 3] not considered?*\\n\\nThank you for your question. We have included comparisons with the algorithms from these papers. The experimental results are presented in Section 5 and detailed further in Appendix B. Our algorithms demonstrate superior performance compared to these methods. Additionally, we discuss the results and analyze why these algorithms do not perform well on Page 9.\\n\\nSpecifically, [1] designs a special UCB index which decreases when a collision occurs. However, as the name suggests, players in this algorithm are selfish and only want to maximize their own rewards. Thus, they fail to utilize the exploration results of others, causing the regret to increase rapidly as $M$ grows.
Both Game of Thrones [2] and ESER [3] follow an explore-then-commit approach, so they rely heavily on parameter tuning. Meanwhile, MCTopM and RandomTopM from [1] are built on the Musical Chairs framework [4], where players randomly preempt a chair with no collision. When delay happens, an arm that was identified as idle in earlier rounds may already have been preempted by other players, but the player always gets out-of-date feedback, resulting in non-stop exploration to find idle arms.\\n\\nMore importantly, none of these algorithms are involved in simulating communication by collisions. The flow of information between players helps them find optimal arms and ensures that the main term in the regret bound is not multiplied by $M$. Our experimental results in Figure 2 also show that the regret of [1, 2, 3] grows rapidly when $M$ increases, while our algorithms are more stable.\\n\\nAs Reviewer fYEh suggested, we also added experiments on a real-world dataset following the work of [5]. See our update on Page 10. The dataset can be found at https://zenodo.org/records/1293283. The throughput is computed using the Shannon formula, which is also aligned with [5]. We also compare the cumulative collisions in this experiment, following the experiments in [5, 6]. The results demonstrate that our algorithms outperform the others.\\n\\nWe observe that the regrets of some algorithms in our comparison increase rapidly, so we evaluated DDSE in both decentralized and centralized settings as Reviewer j5FD suggested. Experimental results in Figure 10 show that the performance of DDSE in the decentralized setting closely matches that in the centralized setting.\\n\\n# Response to Minor Issues:\\n*Q: (1) Line 209: $s<T$ or $s<t$. (2) Is there any difference between sequential hopping and round-robin? (3) Why $|[K]|=M$?*\\n\\n1. Thank you for pointing this out. We have corrected it to $t$.\\n\\n2. We sincerely appreciate your observation.
The terms \\\"sequential hopping\\\" and \\\"round-robin\\\" indeed refer to the same concept. To avoid confusion, we have updated the paper to consistently use \\\"round-robin\\\" throughout our paper.\\n\\n3. We are truly grateful for your attention to the notation. Our previous notation was slightly ambiguous. As explained in \\\"Respond to Question 4\\\", in multi-player bandits, each player has her optimal arm so we totally have $M$ optimal arms. When the leader eliminates all sub-optimal arms, i.e., there are $M$ optimal arms left, she will send collisions in the third block of the next communication phase and then begin exploitation. To clarify, we have introduced an active arm set $\\\\mathcal{K}$ and replaced instances of $[K]$ with $\\\\mathcal{K}$ where appropriate. This ensures the notation aligns with the context more accurately.\\n\\n# Reference\\n[1] Besson, Lilian, and Emilie Kaufmann. \\\"Multi-player bandits revisited.\\\"\\u00a0_Algorithmic Learning Theory_. PMLR, 2018.\\n\\n[2] Bistritz, Ilai, and Amir Leshem. \\\"Distributed multi-player bandits-a game of thrones approach.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a031 (2018).\\n\\n[3] Tibrewal, Harshvardhan, et al. \\\"Distributed learning and optimal assignment in multiplayer heterogeneous networks.\\\"\\u00a0_IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019.\\n\\n[4] Rosenski, Jonathan, Ohad Shamir, and Liran Szlak. \\\"Multi-player bandits\\u2013a musical chairs approach.\\\"\\u00a0_International Conference on Machine Learning_. PMLR, 2016.\\n\\n[5] Alipour-Fanid, Amir, et al. \\\"Multiuser scheduling in centralized cognitive radio networks: A multi-armed bandit approach.\\\"\\u00a0_IEEE Transactions on Cognitive Communications and Networking_\\u00a08.2 (2022): 1074-1091.\\n\\n[6] Wang, Wenbo, et al. 
\\\"Decentralized learning for channel allocation in IoT networks over unlicensed bandwidth as a contextual multi-player multi-armed bandit game.\\\"\\u00a0_IEEE Transactions on Wireless Communications_\\u00a021.5 (2021): 3162-3178.\"}", "{\"title\": \"Response to Reviewer ix9S (Part 3)\", \"comment\": \"## Regret Analysis of DDSE\\n**Exploration Phase**\\n\\nLemma 5 ensures that the delayed feedback from the communication phase of all followers is bounded. Then Lemma 6 establishes the accuracy of the estimates for $\\\\mathbb{E}[d]$ and $\\\\sigma _ d^2$. As a result, player $j$ can correctly determine $q _ j$ and align with the same best empirical arm set, thereby preventing collisions caused by inconsistencies between the leader and the followers. Thus, the regret in exploration phase is generated from (1) selecting sub-optimal arms, (2) players not receiving any feedback initially, and (3) the leader not entering the exploitation phase immediately after identifying all sub-optimal arms. During the period after the leader identifies all sub-optimal arms but before entering the exploitation phase, the leader still needs to maintain consistency with followers by selecting arms in $\\\\mathcal{M} _ {p-q _ M}$. We separately bound these terms in Appendix D.\\n\\n**Communication Phase**\\n\\nWe have already known that the length of each communication phase is $K+2M$. Note that players enter a communication phase every $KM\\\\log(T)$ rounds. The next step is to bound the times that the leader needs to receive feedback and eliminate all sub-optimal arms. Thus, $T _ {expl}/KM\\\\log(T)$ indicates the number of times players enter a communication phase, which we then multiply by the phase length $K+2M$. Note that there are $M$ players in communication phase, so we finally take a union bound of $M$.\\n\\nCombining the results in exploration and communication phase, we derive the regret bound of DDSE. 
By comparing with the lower bound in Theorem 1, DDSE is near-optimal.\\n\\n# Response to Question 1:\\n*Q: Can the authors highlight the main technical challenges?*\\n\\nWe appreciate the reviewer's insightful question. The main technical challenge lies in accurately estimating the delay within the decentralized multi-armed bandit setting, particularly under the collision model. While delay analysis has been explored previously, our approach introduces a novel consideration of sub-Gaussian delays, which adds complexity to bounding delays during the communication phase. Specifically, we address the challenge by proving the correctness of our estimations for both $\\\\mathbb{E}[d]$ and $\\\\sigma^2 _ d$ (Lemma 2) and analyzing potential errors in delay estimation. Leveraging the fact that the square of a sub-Gaussian variable is sub-Exponential, we rigorously bound the probabilities of deviations $|\\\\hat{\\\\mu} _ {d _ t^j}-\\\\mathbb{E}[d]| \\\\leq \\\\epsilon _ {\\\\mu}$ and $|\\\\hat{\\\\sigma} _ {d _ t^j}^2-\\\\sigma _ d^2| \\\\leq \\\\epsilon _ {\\\\sigma}$. In Lemma 9, applying the inverse Jensen inequality twice allows us to derive $\\\\mathbb{E}[s _ p-s _ {p-1}]$, so that we can utilize $\\\\mathbb{E}[s _ p-s _ {p-1}]=KM\\\\log(T)$. Combining this with the term $T$ on the outside, we finally get the bound of $R_{\\\\mathcal{F}}$.\"}", "{\"title\": \"Response to Reviewer fYEh (Part 1)\", \"comment\": \"Thank you for your comments about our manuscript. We have studied the comments carefully, and find them valuable for improving our paper. The responses to your comments are as follows:\\n# Response to Question 1:\\n*Q: The proposed algorithm takes a leader-follower approach which makes it semi-distributed.*\\n\\nWe agree that many works on decentralized MMAB do not need to assume that $M$ and the ID of each player are known. Actually, we have considered this fully decentralized setting at the beginning of our work. 
In such fully decentralized multi-player bandits, players do not know of each other's existence, so collisions cannot be avoided by design. Thus, decentralized MMAB algorithms need an initialization phase in which players use collisions to gather information before exploration, so that they can then select arms in a round-robin way. There are two kinds of initialization methods.\\n1. [1, 2, 3] adopt a musical-chairs method, where each player preempts an arm until no collision happens. After preempting an arm, players then intentionally select certain arms. By counting the accumulated collisions in this period, each player learns $M$ and her ID among these players.\\n2. In [4], players also try to preempt arms once. A player who fails to preempt an arm, i.e., receives a collision, later selects specific arms to signal to the other players that she has not yet found her arm. When no collision occurs, every player has found a proper arm. The players then perform a procedure similar to [1, 2, 3] to learn $M$ and their IDs.\\n\\nHowever, when delay is introduced to the environment, players cannot observe collisions immediately. If the delay is too long, players must wait in the initialization phase for the collision feedback, because they need $M$ and their IDs to start an exploration in which they can select arms in a round-robin way. In effect, players expect to receive feedback within a certain known period, which conflicts with a scenario where feedback is delayed for an unknown period.\\n\\nOf course, if we assume that delay is bounded by a known value $d _ {\\max}$, the initialization problem is solved easily by using the classical technique and waiting an extra $d _ {\\max}$ rounds at the beginning.
In cognitive radio networks, transmission delay is sometimes bounded by protocol limits, so it is also natural to consider a bounded delay where we can use $d_{\\\\max}$ as the input of algorithms to perform initialization.\\n\\nOur assumption on delay is milder than this bounded delay. We allow some exceedingly large and unknown delays in the environment. If the initialization assumption is removed, i.e., players are fully decentralized, we can also implement the aforementioned technique in our algorithms. For example, players start with an initialization phase. Although their feedback might not be complete, they still begin to explore arms. If a player in exploration experiences a collision that should not have happened, she notifies the others, again by sending collisions at a specific time. After a period of delay, players receive the signal and start an initialization again. Then comes the next exploration. By doing so, it is possible to give a probability guarantee using our sub-Gaussian delay assumption. However, it would make the notation more complicated. The existing problem is already challenging enough, and our work is the first paper to handle delay in multi-player bandits where a collision occurs when players select the same arm. Our goal is to pave the way for further study in this area. A fully decentralized MMAB can be discussed in future work.\"}", "{\"comment\": \"Thank you for pointing this out and for allowing me to clarify further.\\n\\nThe key lemma in proving Theorem 2 is Lemma 6, which provides the probability of incorrect delay estimation. It should be noted that $\\\\sigma _ d$ appears in the denominator and could potentially be large, which means the bound in Theorem 2 does not always hold with high probability. However, $n$ in Lemma 6 is actually the number of delay samples. As $t$ increases, $n$ can be very large. This increasing $n$ can balance the impact of a large $\\\\sigma _ d$. \\n\\nWe will improve the proof of Theorem 2.
As OpenReview does not allow updates to the PDF now, we briefly explain the improvement here. The expected regret should include a term related to incorrect delay estimation. This term is the probability in Lemma 6 multiplied by $T$. Note that after multiplying by $T$, the regret from an incorrect $\\hat{\\sigma} _ d$ (we take $\\hat{\\sigma} _ d$ as the example because the regret from an incorrect $\\hat{\\mu} _ d$ is smaller) is\\n\\n$$\\n\\left(\\frac{1}{T}\\right)^{\\frac{nK^2M^2}{320\\sqrt{2}\\sigma _ d} - 1}\\n$$\\n\\nWe consider two situations: $n \\geq \\frac{320\\sqrt{2}\\sigma _ d}{K^2M^2}$ and $n < \\frac{320\\sqrt{2}\\sigma _ d}{K^2M^2}$. When $n \\geq \\frac{320\\sqrt{2}\\sigma _ d}{K^2M^2}$, this regret is directly bounded by $1$. Otherwise, we bound the regret using the number of times players select arms. Due to delay, the number of times players select arms is not the same as $n$, but the two quantities are related via Lemma 4. Thus, the regret when $n < \\frac{320\\sqrt{2}\\sigma _ d}{K^2M^2}$ is at most $\\frac{640\\sqrt{2}\\sigma _ d}{\\theta K^2M^2} + d(\\theta)$, which does not affect the near-optimality of our result. We will check it carefully and update our proof.\"}", "{\"title\": \"Response to Reviewer fYEh (Part 4)\", \"comment\": \"# Response to Question 7:\\n*Q: Since the paper is highly motivated by cognitive radio applications, I expected some real wireless networks simulations.*\\n\\nWe have run experiments on a real-world dataset following the work of [13]. See our update on Page 10. The dataset can be found at https://zenodo.org/records/1293283. The throughput is computed using the Shannon formula, which is also aligned with [13]. We also compare the cumulative collisions in this experiment, following the experiments in [13, 14]. The results demonstrate that our algorithms outperform the others.
However, the ns-3 simulation is still in progress, and we aim to complete this experiment before the deadline.\"}\\n# Reference\\n[1] Boursier, Etienne, and Vianney Perchet. \\\"SIC-MMAB: Synchronisation involves communication in multiplayer multi-armed bandits.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a032 (2019).\\n\\n[2] Shi, Chengshuai, et al. \\\"Decentralized multi-player multi-armed bandits with no collision information.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.\\n\\n[3] Huang, Wei, Richard Combes, and Cindy Trinh. \\\"Towards optimal algorithms for multi-player bandits without collision sensing information.\\\"\\u00a0_Conference on Learning Theory_. PMLR, 2022.\\n\\n[4] Wang, Po-An, et al. \\\"Optimal algorithms for multiplayer multi-armed bandits.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.\\n\\n[5] Ahmad, Wan Siti Halimatul Munirah Wan, et al. \\\"5G technology: Towards dynamic spectrum sharing using cognitive radio networks.\\\"\\u00a0_IEEE access_\\u00a08 (2020): 14460-14488.\\n\\n[6] Wang, Shanshan, Junshan Zhang, and Lang Tong. \\\"Delay analysis for cognitive radio networks with random access: A fluid queue view.\\\"\\u00a0_2010 Proceedings IEEE INFOCOM_. IEEE, 2010.\\n\\n[7] Laourine, Amine, Shiyao Chen, and Lang Tong. \\\"Queuing analysis in multichannel cognitive spectrum access: A large deviation approach.\\\"\\u00a0_2010 Proceedings IEEE INFOCOM_. IEEE, 2010.\\n\\n[8] Li, Yandi, and Jianxiong Guo. \\\"A Modified EXP3 and Its Adaptive Variant in Adversarial Bandits with Multi-User Delayed Feedback.\\\"\\u00a0_arXiv preprint arXiv:2310.11188_\\u00a0(2023).\\n\\n[9] van der Hoeven, Dirk, et al. \\\"A Unified Analysis of Nonstochastic Delayed Feedback for Combinatorial Semi-Bandits, Linear Bandits, and MDPs.\\\"\\u00a0_The Thirty Sixth Annual Conference on Learning Theory_. PMLR, 2023.\\n\\n[10] Wang, Dairui, et al.
\\\"Cascading bandits: optimizing recommendation frequency in delayed feedback environments.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a036 (2024).\\n\\n[11] Richard, Hugo, Etienne Boursier, and Vianney Perchet. \\\"Constant or Logarithmic Regret in Asynchronous Multiplayer Bandits with Limited Communication.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2024.\\n\\n[12] Lattimore, Tor, and Csaba Szepesv\\u00e1ri.\\u00a0_Bandit algorithms_. Cambridge University Press, 2020.\\n\\n[13] Alipour-Fanid, Amir, et al. \\\"Multiuser scheduling in centralized cognitive radio networks: A multi-armed bandit approach.\\\"\\u00a0_IEEE Transactions on Cognitive Communications and Networking_\\u00a08.2 (2022): 1074-1091.\\n\\n[14] Wang, Wenbo, et al. \\\"Decentralized learning for channel allocation in IoT networks over unlicensed bandwidth as a contextual multi-player multi-armed bandit game.\\\"\\u00a0_IEEE Transactions on Wireless Communications_\\u00a021.5 (2021): 3162-3178.\"}", "{\"title\": \"Response to Reviewer j5FD (Part 4)\", \"comment\": \"# Reference\\n[1] Boursier, Etienne, and Vianney Perchet. \\\"SIC-MMAB: Synchronisation involves communication in multiplayer multi-armed bandits.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a032 (2019).\\n\\n[2] Shi, Chengshuai, et al. \\\"Decentralized multi-player multi-armed bandits with no collision information.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.\\n\\n[3] Huang, Wei, Richard Combes, and Cindy Trinh. \\\"Towards optimal algorithms for multi-player bandits without collision sensing information.\\\"\\u00a0_Conference on Learning Theory_. PMLR, 2022.\\n\\n[4] Wang, Po-An, et al. \\\"Optimal algorithms for multiplayer multi-armed bandits.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.\\n\\n[5] Besson, Lilian, and Emilie Kaufmann. 
\\\"Multi-player bandits revisited.\\\"\\u00a0_Algorithmic Learning Theory_. PMLR, 2018.\\n\\n[6] Bistritz, Ilai, and Amir Leshem. \\\"Distributed multi-player bandits-a game of thrones approach.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a031 (2018).\\n\\n[7] Tibrewal, Harshvardhan, et al. \\\"Distributed learning and optimal assignment in multiplayer heterogeneous networks.\\\"\\u00a0_IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019.\\n\\n[8] Richard, Hugo, Etienne Boursier, and Vianney Perchet. \\\"Constant or Logarithmic Regret in Asynchronous Multiplayer Bandits with Limited Communication.\\\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2024.\"}", "{\"title\": \"Thanks again\", \"comment\": \"Thank you for answering my follow-up questions. It indeed clarified my doubt on how to find the optimal set of arms in case the rewards are different for different players. However, here the setting is that the rewards are the same across the players.\\n\\nTwo more questions-- Regret is generally associated with the high-probability bound, however, I did not see those effects in Theorems 1 and 2, and we do not see this effect. Is it an average regret?\\n\\nDo the authors have a lower bound result on the delay-distribution parameters to show the tightness of the result?\"}", "{\"title\": \"Response to Reviewer N47W\", \"comment\": \"Thanks for your valuable feedback. We recognize that [1] includes a signaling phase, and we acknowledge that there were inaccuracies in our initial explanation of this work. However, we would like to emphasize that our experiments were conducted rigorously and correctly. This algorithm was implemented using an open-source GitHub library [2]. We have updated our supplementary material since 22 Nov 2024. For further clarification, we invite you to review the code provided in \\\"$\\\\texttt{ReviewCode/tibrewal2019}$\\\". 
It should be noted that ESER performs well when parameters are appropriate (e.g. Figure 2 (a)), but its regret grows rapidly when $K$ and $M$ increase.\\n\\n[1] Tibrewal, Harshvardhan, et al. \\\"Distributed learning and optimal assignment in multiplayer heterogeneous networks.\\\"\\u00a0_IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019.\\n\\n[2] Wang, Wenbo, et al. \\\"Decentralized learning for channel allocation in IoT networks over unlicensed bandwidth as a contextual multi-player multi-armed bandit game.\\\"\\u00a0_IEEE Transactions on Wireless Communications_\\u00a021.5 (2021): 3162-3178. https://github.com/wbwang2020/MP-MAB\"}", "{\"title\": \"Response to Reviewer j5FD (Part 2)\", \"comment\": \"## The first block is used for removing an arm from $\\\\mathcal{M}^j _ p$.\\nThe leader first finds the bad arm $a^- _ p$ in $\\\\mathcal{M}^j _ p$ and identifies the position of $a^- _ p$. Then the leader selects the $i _ {a^- _ p}$-th arm in $\\\\mathcal{M}^j _ {p-q _ j}$ $M$ times. Followers still select arms in $\\\\mathcal{M}^j _ {p-q _ j}$ in a round-robin way. When a collision occurs, a follower saves the position at which the collision happens. Once she knows both the position of the arm to be removed and the arm to be added, she updates the best arm set. \\nIn the example, players are in phase $2$ (i.e., $p=2$), but they still use $\\\\mathcal{M} _ 1^j$. $t _ 1$ to $t _ 4$ denote the rounds in the first block. We suppose that the leader wants to remove the arm in position $4$ of $\\\\mathcal{M} _ 1^j$, i.e., arm $3$. The arm selection is in the table.\\n\\n| | $t _ 1$ | $t _ 2$ | $t _ 3$ | $t _ 4$ |\\n| ---------- | ----- | ----- | ----- | ----- |\\n| Leader | 3 | 3 | 3 | 3 |\\n| Follower 1 | 4 | 2 | 6 | 3 |\\n| Follower 2 | 2 | 6 | 3 | 4 |\\n| Follower 3 | 6 | 3 | 4 | 2 |\\n\\nEach follower collides on arm $3$ once. They remember that the position of the collision in $p=2$ is $4$.
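For concreteness, the position-signaling just described can be sketched in a few lines of Python. This is an illustrative sketch with invented names (`first_block` and its arguments are ours), not the paper's actual implementation: the leader repeatedly plays the arm sitting at the position she wants to signal, while follower $f$ sweeps the shared set round-robin starting from position $f$, collides with the leader exactly once, and reads off the signaled position.

```python
# Sketch of the first communication block (illustration only, not the
# authors' code). The leader pins the arm at the position to signal;
# each follower decodes that position from where her collision occurs.

def first_block(shared_set, signal_pos, num_followers):
    # shared_set: the common best empirical arm set (1-indexed positions)
    # signal_pos: position (1..M) of the arm the leader wants replaced
    M = len(shared_set)
    pinned = shared_set[signal_pos - 1]    # leader plays this arm for M rounds
    decoded = []
    for f in range(1, num_followers + 1):  # follower f starts at position f
        for t in range(M):                 # her round-robin sweep of the set
            idx = (f - 1 + t) % M
            if shared_set[idx] == pinned:  # collision with the leader
                decoded.append(idx + 1)    # record the colliding position
                break
    return decoded

# Example matching the table: shared set {4, 2, 6, 3}, leader signals position 4
print(first_block([4, 2, 6, 3], 4, 3))     # -> [4, 4, 4]
```

All three followers decode position $4$, matching the single collision each one has on arm $3$ in the table; once the (possibly delayed) feedback arrives, the arm at that position is the one to be replaced.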
\\n\\n## The second block is used for adding an arm to $\\\\mathcal{M}^j _ p$.\\nThe leader finds a better arm with higher empirical reward that is not in $\\\\mathcal{M}^j _ p$. Then she wants to pass the new arm to followers. Because the new arm $a^+ _ p$ might not be in $\\\\mathcal{M}^j _ p$ or $\\\\mathcal{M}^j _ {p-q _ j}$, we cannot let followers receive the collision information via the best arm set. Thus, we utilize the whole arm set. The leader selects $a^+ _ p$ for $K$ consecutive rounds. Followers select all arms in a round-robin way. This block lasts $K$ rounds because the original arm set has size $K$ and we use the original whole arm set to pass the information.\\n\\nIn the example, $t _ 5$ to $t _ {10}$ denote the rounds in the second block. We suppose that the leader wants to add arm $5$.\\n\\n| | $t _ 5$ | $t _ 6$ | $t _ 7$ | $t _ 8$ | $t _ 9$ | $t _ {10}$ |\\n| ---------- | ----- | ----- | ----- | ----- | ----- | -------- |\\n| Leader | 5 | 5 | 5 | 5 | 5 | 5 |\\n| Follower 1 | 1 | 2 | 3 | 4 | 5 | 6 |\\n| Follower 2 | 2 | 3 | 4 | 5 | 6 | 1 |\\n| Follower 3 | 3 | 4 | 5 | 6 | 1 | 2 |\\n\\nEach follower collides on arm $5$ once. They remember that the arm where the collision happens is $5$. \\n## The third block is used for passing the ending of exploration.\\nThe leader explores and gradually eliminates all sub-optimal arms from $[K]$. Followers, however, do not know when the leader's exploration ends. If the leader's exploration has ended and all optimal arms have been passed to the followers, continuing to enter the communication phase is a waste of time for the followers. Therefore, a block used for passing the ending of exploration is necessary. Players' actions are similar to those in the first block.\\n## After a period of delay:\\nNote that the collisions in the first and second blocks cannot be received immediately. After a period of delay, follower $j$ receives the position $4$ and the arm to be added is $5$.
She also knows the feedback is from phase $2$. Therefore, she updates:\\n$$\\n\\\\mathcal{M} _ 1^j = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\rightarrow \\\\mathrm{replace\\\\ the\\\\ arm\\\\ in\\\\ position\\\\ 4\\\\ with\\\\ arm\\\\ 5} \\\\rightarrow\\\\mathcal{M}^j _ 2 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\n$$\\nNo matter how late the feedback is received by the follower, she can update the correct $\\\\mathcal{M} _ p^j$.\\n\\n# Response to Weakness 3:\\n*Q: I suggest the authors move Subsection 5.3 ahead of Subsection 5.1 for better logic, as the centralized lower bound serves as the benchmark.*\\n\\nWe sincerely thank the reviewer for the valuable suggestion regarding the organization of the theoretical analysis section. In response, we have moved the lower bound to the beginning of this section, as it serves as a natural benchmark for the subsequent analysis.\\n# Response to Weakness 4:\\n\\n*Q: In the experiments, the number of players\\u00a0$M$\\u00a0is relatively small compared to typical applications of cognitive radio networks.*\\n\\nWe appreciate the reviewer pointing out the concern regarding the relatively small number of players. To address this, we have conducted additional experiments with larger parameters, specifically for $M=30$ and $M=40$. The results, which are now included in Appendix B, demonstrate that our algorithms continue to outperform the others as the number of players increases.\"}", "{\"metareview\": \"The paper addresses the multi-player multi-armed bandit (MP-MAB) problem within cognitive radio networks, where multiple users select channels (arms) and receive immediate feedback. Traditional research in this area typically focuses on the nature of immediate feedback, such as whether it includes both reward and collision information or just the reward. However, in real-world cognitive networks, spectrum sensing often introduces delays in feedback, complicating the decision-making process.
To tackle this, the authors propose the Decentralized Delayed Successive Elimination (DDSE) algorithm, specifically designed to handle stochastic delays in feedback. Unlike existing algorithms that do not account for such delays, DDSE enables players to adapt to delayed information and effectively avoid collisions, enhancing overall network performance.\\n\\nThe authors establish a theoretical regret bound for DDSE, demonstrating its efficiency and near-optimal performance by deriving a corresponding lower bound in a centralized setting. This theoretical validation highlights DDSE\\u2019s superiority over existing approaches that fail to manage delayed feedback. Additionally, the paper presents comprehensive numerical experiments using both synthetic and real-world datasets, which confirm the algorithm\\u2019s effectiveness and practical applicability. Overall, the study makes significant contributions by introducing a robust solution for delayed feedback scenarios in MP-MAB problems, relevant to applications like cognitive radio networks, and by providing both theoretical and empirical evidence of its advantages.\\n\\nThe reviewers' evaluations diverge widely. Two reviewers, specifically, raise questions about the contribution beyond the existing literature, some inconsistent theoretical and empirical observations, and clarity in presentation.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers and authors engaged in the discussion period. The discussions, however, did not change the verdicts of the critical reviewers.\"}", "{\"summary\": \"Multi-player multi-armed bandits have been researched for a long time due to their application in cognitive radio networks. In this setting, multiple players select arms at each time and instantly receive feedback. Most research on this problem focuses on the content of the immediate feedback, whether it includes both the reward and collision information or the reward alone. 
However, delay is common in cognitive networks when users perform spectrum sensing. This paper designs a decentralized delayed successive elimination (DDSE) algorithm in multi-player multi-armed bandits with stochastic delay feedback and establishes a regret bound. This algorithm enables players to adapt to delayed feedback and avoid collisions.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"In order to address the challenge of delay in cognitive radio networks, this paper proposes a novel bandit framework where multiple players engage in a multi-armed bandit and, if two or more players select the same arm, none of them receives the reward. In this framework, players receive feedback after a period of stochastic delay, which complicates their ability to learn and adapt in real time, making it exceedingly difficult to avoid collisions and optimize performance. To solve this problem, this paper designs a DDSE algorithm in multi-player multi-armed bandits with stochastic delay feedback and establishes a regret bound for the proposed algorithm.\", \"weaknesses\": \"The paper provides a series of technical results, but the writing is muddled and many key symbols are not explained. It is very hard to get some intuition about the approach and its possible advantages and disadvantages. In addition, the description of the algorithm is full of confusion, with many unexplained symbols inside.\\n\\n1. Why set the length of each communication phase as $K+2M$? The authors should explain the reasons for the design. If the length of the communication phase becomes time-varying, will the methods in this paper still apply?\\n\\n2. The paper provides an analysis of the lower bound for the centralized algorithm in Theorem 3, but lacks an analysis of the lower bound for the decentralized algorithm, which should be the main focus of the paper.\\n\\n3. 
According to Theorem 1 and Theorem 2, DDSE has better convergence performance than DDSE without delay estimation. However, in larger scenarios (Fig. 4(d)), DDSE without delay estimation performs better than DDSE. What is the significance of considering delay estimation in large-scale scenarios?\\n\\n4. This paper lacks a description of the proof process for the theorems. In addition, the result of Theorem 1 is complex and the paper lacks specific explanations for its terms.\", \"questions\": \"1. The writing is confused, many symbols are written incorrectly, and there are symbols that are not explained. For example,\\n(1) On line 157 of page 3, $r^{j}(s)$ should be written as $r^{j}_{k}(s)$;\\n(2) what is the difference between $\\\\mu_{k}$ and $\\\\mu_{(k)}$;\\n(3) The definition of $N_{t}(k)$ on line 210 of page 4 is incorrect; \\n(4) The $\\\\mathcal{M}_{0}$ in Algorithm 1 should be $\\\\mathcal{M}^{M}_{0}$;\\n(5) What is the $\\\\mathcal{M}_{com}$ in Algorithm 1?\\n\\n2. The introduction of the Algorithm 1 is very confusing. For example,\\n(1) What does line 10 of Algorithm 1 do?\\n(2) In the model, the authors claim that $M\\\\leq K$, but in Algorithm 1, $|[K]|=M$ is used as a criterion for judgment. Please explain this issue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Concerns with experiments\", \"comment\": \"\\\"None of these algorithms are involved in simulating communication by collisions\\\" means that [1, 2, 3] do not design a specific communication phase where players pass the reward or arm choice to others. Players in [1, 2, 3] do not utilize the exploration results of others. Therefore, the regrets in [1, 2, 3] are multiplied by $M$. 
Our regret of DDSE is $O(\\\\log(T))$ due to the communication phase where players communicate on the update of $\\\\mathcal{M} _ {p}^j$.\\n\\n\\n| Algorithms | Regret |\\n| ------------------------------ | ------------------------------------------ |\\n| Game of Thrones [1] | $O(M\\\\log^{2+\\\\delta}(T))$ |\\n| MCTopM-kl_UCB [2] | $G _{M,\\\\mathbf{\\\\mu}}\\\\log(T)$ |\\n| Selfish [2] | unknown |\\n| ESER (known $\\\\Delta _{\\\\min}$) [3] | $O(M^2K\\\\log(T))$ |\\n| ESER (unknown $\\\\Delta _{\\\\min}$) [3] | $O(M^2K\\\\Delta _{\\\\max}\\\\log^{1+\\\\beta}(T))$ |\\n| Ours | $O(\\\\log(T))$ |\\n\\n[1, 2, 3] only run experiments with small $K$ and $M$. When $M$ increases, their results degrade. In our experiments, we use at most $K=50$ and $M=40$. \\n\\n\\n| | Experiment parameter |\\n| --- | ------------------------- |\\n| [1] | $K=M=5$ |\\n| [2] | at most $K=9$ and $M=6$ |\\n| [3] | $K=12$, $M=\\\\{6, 10, 12\\\\}$ |\\n\\nWe originally did not compare with [1, 2, 3] because they do not have a communication phase. In contrast, a comparison with SIC-MMAB [4] is more suitable. Experimental results in Figure 10 have already shown that the performance of DDSE in the decentralized setting closely matches that in the centralized setting. We do not know why Reviewer N47W asks us to compare with [1, 2, 3] and is not satisfied with the comparison results. However, we would like to explain the experiments if you still have questions.\\n\\n[1] Bistritz, Ilai, and Amir Leshem. \\\"Distributed multi-player bandits-a game of thrones approach.\\\"\\u00a0_Advances in Neural Information Processing Systems_ 31 (2018).\\n\\n[2] Besson, Lilian, and Emilie Kaufmann. \\\"Multi-player bandits revisited.\\\"\\u00a0_Algorithmic Learning Theory_. PMLR, 2018.\\n\\n[3] Tibrewal, Harshvardhan, et al. \\\"Distributed learning and optimal assignment in multiplayer heterogeneous networks.\\\"\\u00a0_IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. 
IEEE, 2019.\\n\\n[4] Boursier, Etienne, and Vianney Perchet. \\\"SIC-MMAB: Synchronisation involves communication in multiplayer multi-armed bandits.\\\" Advances in Neural Information Processing Systems 32 (2019).\"}", "{\"title\": \"Response to Reviewer j5FD (Part 1)\", \"comment\": \"Thank you for your comments about our manuscript. We have studied the comments carefully, and find them valuable for improving our paper. The responses to your comments are as follows:\\n# Response to Weakness 1:\\n\\u00a0*Q: This DDSE algorithm is not fully decentralized. In many cognitive radio networks, sensing nodes are dynamic, and some nodes are even hidden or unknown to the network operator.*\\n\\nWe considered a fully decentralized setting at the beginning of our work. In such fully decentralized multi-player bandits, players do not know of the existence of others, so there is no guarantee of avoiding collisions. Thus, decentralized MMAB algorithms need an initialization phase where players utilize collisions to get some information before exploration, so that they can select arms in a round-robin way. There are two kinds of methods for initialization.\\n\\n1. [1, 2, 3] adopt a musical chairs method, where each player preempts an arm until no collision happens. After preempting an arm, players then intentionally select some arms. By counting the accumulated collisions in this period, each player can learn $M$ and her ID among these players.\\n\\n2. In [4], players also try to preempt arms once. If a player fails to preempt one, i.e., receives a collision, she will select specific arms later to pass the information that she has not found her arm to other players. When no collision occurs, it means that everyone has found a proper arm. Then they perform a procedure similar to [1, 2, 3] and get the information on $M$ and their IDs.\\n\\nHowever, when delay is introduced to the environment, players cannot receive the collision immediately. 
If the delay is too long, players must wait in the initialization phase to receive the collision feedback, because they need to get $M$ and their ID to start an exploration where they can select arms in a round-robin way. In fact, players expect to receive some feedback within a certain known period, which conflicts with the scenario where feedback is delayed for an unknown period.\\n\\nOf course, if we assume that delay is bounded by a known value $d _ {\\\\max}$, the problem of initialization will be solved easily by using the classical technique and waiting an extra $d _ {\\\\max}$ rounds at the beginning. In cognitive radio networks, transmission delay is sometimes bounded by protocol limits, so it is also natural to consider a bounded delay where we can use $d _ {\\\\max}$ as the input of algorithms to perform an initialization.\\n\\nOur assumption on delay is milder than this bounded-delay assumption. We allow some exceedingly large and unknown delays in the environment. If the assumption on the initialization does not exist, i.e., players are fully decentralized, we can also implement the aforementioned technique in our algorithms. For example, players start with an initialization phase. Although their feedback might not be complete, they still begin to explore arms. If a player in exploration has a collision that should not have happened, she also notifies others by sending collisions at a specific time. After a period of delay, players receive the signal and start an initialization again. Then comes the next exploration. By doing so, it is possible to give a probability guarantee using our sub-Gaussian delay assumption. However, it would make the notation more complicated. The existing problem is challenging enough, and our work is the first paper handling delay in multi-player bandits where a collision occurs when players select the same arm. Our goal is to pave the way for further study in this area. 
A fully decentralized MMAB can be discussed in future work.\\n\\n# Response to Weakness 2:\\n*Q: The communication assumption weakens the solution in this work.*\\n\\nWe do not assume that players can freely communicate. They only know $M$ and their own ID. The communication phase is designed to overcome the difficulty that players cannot directly communicate. Recall that a player $j$ receives feedback on the reward of the arm that she selected several rounds ago and on whether she had a collision on that arm. The collision indicates that there exists at least one other player who also selected the same arm. In our algorithm design, collisions that happen in specific periods carry implicit information. Thus, by intentionally sending collisions, players can pass some information to others. \\n\\nNext, we explain our communication phase here. The primary goal of the communication phase is to pass the update of $\\mathcal{M} _ p^j$. Each communication phase is composed of three blocks. We consider $M=4$ and $K=6$. The example is:\\n\\n\\\\begin{aligned} \\n\\\\mathrm{Leader:} \\\\ &\\\\mathcal{M}^4_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}, \\\\mathcal{M}^4_2 = \\\\\\\\{4, 2, 6, 1\\\\\\\\}, \\\\mathcal{M}^4_3 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\\\\\\\\\\\\\\\ \\n\\\\mathrm{Follower\\\\ 1:} \\\\ &\\\\mathcal{M}^1_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 2:} \\\\ &\\\\mathcal{M}^2_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 3:} \\\\ &\\\\mathcal{M}^3_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}\\n\\\\end{aligned}\"}", "{\"title\": \"Response to Reviewer WghT (Part 3)\", \"comment\": \"# Response to Weakness 4:\\n*Q: This paper lacks a description of the proof process for the theorems.*\\n\\nThanks for your advice. We have added some description in Appendix D. 
We briefly describe the proof of Theorem 1 here.\\n## Exploration Phase\\nIn the exploration phase, Lemma 5 ensures that the delayed feedback from the communication phase of all followers is bounded. Then Lemma 6 establishes the accuracy of the estimates for $\\mathbb{E}[d]$ and $\\sigma _ d^2$. As a result, player $j$ can correctly determine $q _ j$ and align with the same best empirical arm set, thereby preventing collisions caused by inconsistencies between the leader and the followers. Thus, the regret in the exploration phase arises from (1) selecting sub-optimal arms, (2) players not receiving any feedback initially, and (3) the leader not entering the exploitation phase immediately after identifying all sub-optimal arms. During the period after the leader identifies all sub-optimal arms but before entering the exploitation phase, the leader still needs to maintain consistency with followers by selecting arms in $\\mathcal{M} _ {p-q _ M}$, i.e., $|\\mathcal{K}|=M$ and $e _ M=M$ but $q _ M\\neq 0$, which do not satisfy Line 14 in Algorithm 1. We then bound these terms separately in Appendix D.\\n## Communication Phase\\nWe already know that the length of each communication phase is $K+2M$. Note that players enter a communication phase every $KM\\log(T)$ rounds. The next step is to bound $T_{expl}$, the total time that the leader needs to receive feedback and eliminate all sub-optimal arms. Thus, $T _ {expl}/KM\\log(T)$ indicates the number of times players enter a communication phase, which we then multiply by the phase length $K+2M$. Note that there are $M$ players in the communication phase, so we finally take a union bound over the $M$ players.\\n\\n# Response to Question 1:\\n*Q: The writing is confused, many symbols are written incorrectly.*\\n\\nWe sincerely thank the reviewer for the careful reading and valuable feedback. All noted issues with symbols and writing have been addressed. 
Below, we provide our detailed responses to the points raised.\\n1. We use the definition $r^j(s)$ because the reward has already been determined by a given player $j$ and a round $s$. There is no need to add the subscript $k$.\\n2. As stated in Line 154, $\\mu _ {(k)}$ is the $k$-th highest reward. We have the order $\\mu _ {(1)} \\geq \\mu _ {(2)} \\geq ... \\geq \\mu _ {(K)}$. We define $\\mu _ k$ as the reward expectation of arm $k$.\\n3. Thank you very much for pointing out this one. We have updated it in Line 203. $N_t(k):= \\sum_{s \\leq t} \\mathbb{I}\\{\\pi_s^j=k, j=M\\}$ is the number of times that the leader chooses arm $k$ before $t$.\\n4. We truly appreciate your attention to this detail. The notation has been updated to $\\mathcal{M} _ 0^M$.\\n5. Thank you for highlighting this point. $\\mathcal{M} _ {com}$ was originally used for determining whether to communicate the update of $\\mathcal{M}^j _ p$. If $\\mathcal{M}^M _ {p}=\\mathcal{M}^M _ {p-1}$, the leader does not need to communicate with followers. We have removed the notation $\\mathcal{M} _ {com}$ and changed the if-condition in Algorithm 1 to \"$\\mathtt{if}\\ \\mathcal{M}^M _ p \\neq \\mathcal{M}^M _ {p-1}\\ \\mathtt{then}$ \". If $\\mathcal{M}^M _ p \\neq \\mathcal{M}^M _ {p-1}$, the leader sends collisions in the communication phase. Otherwise, she runs a virtual communication to select arms with followers in $\\mathcal{M}^M _ {p-q_M}$ with no collision.\\n\\n# Response to Question 2:\\n*Q: The introduction of the Algorithm 1 is very confusing. e.g. (1) line 10 of Algorithm 1, (2) concern about $|[K]|=M$.*\\n\\n1. We have removed the notation $\\mathcal{M} _ {com}$ in Line 10 of Algorithm 1 and use \"$\\mathtt{if}\\ \\mathcal{M}^M _ p \\neq \\mathcal{M}^M _ {p-1}\\ \\mathtt{then}$ \" as the if-else condition. The detailed explanation and example are in \"Response to Weakness 1\". 
Thank you for pointing out this improvement.\\n2. The leader explores and gradually eliminates all sub-optimal arms. As stated in \"Response to Weakness 1\", in multi-player bandits, each player has her own optimal arm, so there are $M$ optimal arms in total. When the leader eliminates all sub-optimal arms, i.e., there are $M$ optimal arms left, the arm set shrinks to length $M$. Therefore, when $[K]$ shrinks to only $M$ elements, the exploration ends.\\n\\n# Reference\\n[1] Boursier, Etienne, and Vianney Perchet. \"SIC-MMAB: Synchronisation involves communication in multiplayer multi-armed bandits.\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a032 (2019).\\n\\n[2] Shi, Chengshuai, et al. \"Decentralized multi-player multi-armed bandits with no collision information.\"\\u00a0_International Conference on Artificial Intelligence and Statistics_. PMLR, 2020.\\n\\n[3] Huang, Wei, Richard Combes, and Cindy Trinh. \"Towards optimal algorithms for multi-player bandits without collision sensing information.\"\\u00a0_Conference on Learning Theory_. PMLR, 2022.\"}", "{\"title\": \"Response to Official Comment\", \"comment\": \"We acknowledge that in extreme cases with severe congestion, the delays may not adhere to a sub-Gaussian distribution. In such situations, the delay distribution might exhibit heavy-tailed characteristics, and our assumptions would need to be adjusted accordingly. We plan to address this limitation in our revised manuscript by including a discussion on the applicability of our delay assumptions and potential extensions to handle heavy-tailed delay distributions.\\n\\nThank you for your valuable feedback. Your thoughts help us improve the clarity and robustness of our work.\"}", "{\"title\": \"Response to Reviewer fYEh (Part 3)\", \"comment\": \"# Response to Question 4:\\n*Q: It is not clear why there is a provision of eliminating arms for which LCB is bigger than UCB. 
Please specify the motivation behind the virtual communication phase in details.*\\n## Arm Elimination\\nIn multi-player bandits, there are $M$ players and $K$ arms. The ultimate goal is that every player is assigned an optimal arm. Thus, there exist at least $M$ optimal arms. In our algorithms, the leader explores all arms and puts the $M$ arms with the highest empirical rewards into the best arm set $\\mathcal{M} _ p^j$. Therefore, if the potential reward of arm $k$ is worse than that of at least $M$ arms, it should be considered a bad arm because we already have at least $M$ arms that are better than $k$. Then we can eliminate arm $k$. To evaluate this \"potential reward\", we introduce the LCB and UCB, which are the lower confidence bound and upper confidence bound [12]. They are defined as:\\n$$\\nLCB_t(k):=\\hat{\\mu}_k(t) - \\sqrt{\\frac{2\\log(T)}{n_t(k)}},\\ UCB_t(k):=\\hat{\\mu}_k(t) + \\sqrt{\\frac{2\\log(T)}{n_t(k)}}.\\n$$\\n$\\hat{\\mu}_k(t)$ is the empirical reward expectation of arm $k$ at time $t$ and $n_t(k)$ denotes the number of times that the leader receives feedback on arm $k$ up to $t$. If $UCB_t(k)$ is higher, we deem that arm $k$ is better. In the equation, a higher first term $\\hat{\\mu}_k(t)$ means higher current rewards, which represents exploitation. Note that $T$ is the total time and $n_t(k)$ indicates how familiar we are with this arm. Higher $T$ and lower $n_t(k)$ indicate that we have less knowledge of arm $k$. Thus, a higher second term $\\sqrt{\\frac{2\\log(T)}{n_t(k)}}$ means a higher degree of uncertainty, which represents exploration. By Hoeffding's inequality, we have \\n\\n$$\\nP\\left(\\left|\\hat{\\mu}_k(t)-\\mu_k\\right| \\leq \\sqrt{\\frac{2\\log(T)}{n_t(k)}}\\right) \\geq 1-2(\\frac{1}{T})^4.\\n$$\\nThus, by picking the arm with the highest UCB, the player can finally find the optimal solution. 
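As a small, self-contained numerical illustration of the confidence-bound idea described above (a sketch with made-up empirical means; the function and variable names are illustrative, not the authors' implementation): an arm is dropped once its UCB falls below the LCB of at least $M$ other arms.

```python
import math

def eliminate(arms, mu_hat, n, T, M):
    """Keep an arm only while fewer than M other arms are provably better.

    The confidence radius sqrt(2 log(T) / n(k)) follows the LCB/UCB
    definitions above; `mu_hat` and `n` map each arm to its empirical
    mean and its number of received feedbacks (illustrative inputs).
    """
    radius = {k: math.sqrt(2 * math.log(T) / n[k]) for k in arms}
    ucb = {k: mu_hat[k] + radius[k] for k in arms}
    lcb = {k: mu_hat[k] - radius[k] for k in arms}
    # Arm k is eliminated once its UCB is below the LCB of >= M arms.
    return [k for k in arms
            if sum(lcb[j] > ucb[k] for j in arms if j != k) < M]

arms = [1, 2, 3, 4]
mu_hat = {1: 0.9, 2: 0.8, 3: 0.7, 4: 0.1}   # made-up empirical means
n = {k: 500 for k in arms}                   # feedbacks received per arm
print(eliminate(arms, mu_hat, n, T=10_000, M=2))  # → [1, 2, 3]
```

As more feedback arrives, the radius shrinks and clearly sub-optimal arms are dropped earlier, mirroring how the leader's active set shrinks toward the $M$ optimal arms.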
\\nIn our algorithms, if the UCB of arm $k$ is worse than the LCB of at least $M$ arms, arm $k$ will be eliminated because the optimal arms are among the remaining arms. By eliminating arms gradually, the leader can find the $M$ optimal arms in total.\\n## Motivation behind Virtual Communication\\nWe have modified our algorithms by removing the notation $\\mathcal{M} _ {com}$ as reviewer WghT suggested. Here we briefly describe the communication phase. The leader passes the update of $\\mathcal{M} _ p^j$ to followers at each communication phase. In the communication phase, collisions represent removing or adding an arm to $\\mathcal{M} _ p^j$. The collision is also used for passing the ending signal of exploration. However, if $\\mathcal{M} _ p^j=\\mathcal{M} _ {p-1}^j$ and the exploration does not end, it is not necessary for the leader to pass information in the communication phase. From the followers' point of view, they do not know which communication phases carry no information, so they enter every communication phase at a fixed interval. To maintain alignment with followers, the leader should enter a \"virtual communication\" phase if she does not want to pass any information. In virtual communication, the leader selects arms with followers and does not send collisions. Later, if followers do not receive any feedback from a certain communication phase, they know that the best arm set $\\mathcal{M} _ {p}^j$ is the same as the prior best arm set $\\mathcal{M} _ {p-1}^j$.\\n\\nIf virtual communication did not exist, we would have the following problems: (1) Because followers always enter a communication phase at a fixed interval, in the second block of communication, they select arms from the whole arm set $[K]$. If the leader does not make adjustments, she might collide with followers. This collision will be regarded by followers as information about adding an arm, but actually, the information is wrong. 
(2) Followers update $\\mathcal{M} _ {p}^j$ based on $\\mathcal{M} _ {p-1}^j$; if $\\mathcal{M} _ {p-1}^j$ is blank or wrong, as (1) mentioned, the updated $\\mathcal{M} _ {p}^j$ might also be wrong.\\n\\nTherefore, virtual communication is critical in our algorithm.\\n# Response to Question 5:\\n*Q:\\u00a0Please provide a pointer.*\\n\\nThank you for your comment. We have updated the paper to include a clear reference to the result where the upper bound on the feedback delay is derived. This result is now explicitly linked to Lemma 1.\\n\\n# Response to Question 6:\\n\\u00a0*Q: Can the authors quantify the gap between the lower bound and upper bound on the regret of the proposed algorithm?*\\n\\u00a0\\nThank you for the suggestion. We have updated the paper to address this point. The first term in Theorem 2 (the regret of DDSE) is aligned with Theorem 1 (the lower bound) up to constant factors. The observed difference comes from the decentralized setting, where players cannot directly communicate about rewards or collisions. Importantly, the regret introduced by the decentralized structure and the delay remains independent of $T$. Therefore, our result is near-optimal.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Follow-up Questions\", \"comment\": \"Thank you very much for your thoughtful and detailed feedback. We sincerely appreciate the time and effort you have taken to review our work and provide valuable questions.\\n# Response to Question 1\\nWe assume that players are initialized with their own IDs. The player with ID $M$ becomes the leader and the others are followers. \\n# Response to Question 2\\nYes, the leader uniformly picks arms from the active arm set $\\mathcal{K}$. Specifically, she first pulls the arms in the set of best empirical arms together with the followers. Then she selects the other arms in $\\mathcal{K}$ in a round-robin way while skipping arms in the best arm set. We also use an example to explain the process. 
Let $K=8$ and $M=4$:\\n\\n\\\\begin{aligned} \\n\\\\mathrm{Leader:} \\\\ &\\\\mathcal{M}^4 _ 1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\quad \\\\mathrm{to} \\\\quad \\\\mathcal{M}^4_2 = \\\\\\\\{4, 2, 6, 8\\\\\\\\} \\\\\\\\\\\\\\\\ \\n\\\\mathrm{Follower\\\\ 1:} \\\\ &\\\\mathcal{M}^1 _ 1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 2:} \\\\ &\\\\mathcal{M}^2 _ 1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 3:} \\\\ &\\\\mathcal{M}^3 _ 1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}\\n\\\\end{aligned}\\n\\nIn this example, we suppose that players are focusing on $\\mathcal{M} _ 1^j$. They select arms as follows:\\n\\n| Player | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_5$ | $t_6$ | $t_7$ | $t_8$ |\\n| ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |\\n| Leader | 3 | 4 | 2 | 6 | 1 | 5 | 7 | 8 |\\n| Follower 1 | 4 | 2 | 6 | 3 | 4 | 2 | 6 | 3 |\\n| Follower 2 | 2 | 6 | 3 | 4 | 2 | 6 | 3 | 4 |\\n| Follower 3 | 6 | 3 | 4 | 2 | 6 | 3 | 4 | 2 |\\n\\nThe leader first selects arms in $\\mathcal{M} _ 1^4$ with followers from $t _ 1$ to $t _ 4$. After pulling all arms in $\\mathcal{M} _ 1^4$, she begins to select the other arms in $\\mathcal{K}\\backslash \\mathcal{M} _ 1^4$. The arm selection in one exploration continues for $KM\\log(T)$ rounds. Note that the leader gradually eliminates sub-optimal arms, so $\\mathcal{K}$ is shrinking. \\n\\n# Response to Question 3\\nWe consider the scenario $K \\geq M$, which ensures that every player can find at least one arm without collision. Since there are $M$ players, we have $M$ optimal arms and $K-M$ sub-optimal arms. Our goal is to minimize the regret, which is defined as:\\n\\n$$\\nR _ T := T\\sum _ {j\\in[M]}\\mu _ {(j)} - \\mathbb{E}\\left[ \\sum _ {t=1}^{T} \\sum _ {j\\in[M]} r^j(t) \\right],\\n$$\\n\\nwhere $\\mu _ {(j)}$ is the $j$-th order statistic of $\\mu$, i.e., $\\mu _ {(1)} \\geq \\mu _ {(2)} \\geq ... 
\\\\geq \\\\mu _ {(K)}$. In other words, $R _ T$ is the accumulated regret of all players. The reward expectation $\\\\mu _ {(k)}$ of arm $k$ is the same for different players. The optimal solution is that players select the first $M$ optimal arms in a staggered manner. Within these first $M$ optimal arms, it does not matter which arm each player chooses, as long as they do not collide.\\n\\nIn the scenario, if we only have two arms, players $1$ and $2$ select these two arms separately because they do not have other choices. When $K>2$, if arm $2$ is optimal (its reward expectation is lower than arm $1$ but higher than other arms), players $1$ and $2$ select arm $1$ and $2$, or arm $2$ and $1$. If arm $2$ is not an optimal arm, players should find other optimal arms.\\n\\n# Response to Question 4\\nAs said in response to Q3, the reward expectation $\\\\mu _ {(k)}$ of arm $k$ is the same for different players, so $\\\\Delta _ k$ is dependent on arm $k$. \\n\\nThank you again for your insightful questions. Please feel free to reach out with any further queries or suggestions.\"}", "{\"title\": \"Response to Reviewer WghT (Part 1)\", \"comment\": \"Thank you for your comments about our manuscript. We have studied the comments carefully, and find them valuable for improving our paper. The responses to your comments are as follows.\\n# Response to Weakness 1:\\n*Q: Why set the length of each communication phase as\\u00a0$K+2M$? If the length of communication phase becomes time-varying, will the methods in this paper still apply?*\\n\\nEach communication phase is composed of three blocks. \\n## The first block is used for removing an arm from $\\\\mathcal{M}^j _ p$ . \\nWe first explain the case that players do not delay the update of $\\\\mathcal{M}^j _ p$. The leader continuously selects the arm to be removed for $M$ times. Followers selects arms in $\\\\mathcal{M}^j _ {p}$. 
Because the length of $\\\\mathcal{M}^j _ p$ is $M$, during the process of round-robin selection, each follower will collide with the leader once. The arm that generates a collision will be removed from $\\\\mathcal{M}^j _ p$.\\n\\nIf the update of $\\\\mathcal{M}^j _ p$ is delayed, i.e. players should use $\\\\mathcal{M}^j _ {p-q _ j}$ instead of $\\\\mathcal{M}^j _ {p}$. During this block, we still hope to pass the information that **an arm in $\\\\mathcal{M}^j _ p$ should be removed.** Therefore, the leader firstly finds the bad arm $a^- _ p$ in $\\\\mathcal{M}^j _ p$. If the leader directly selects $a _ p^-$, followers will get the wrong information because they are selecting arms in $\\\\mathcal{M}^j _ {p-q _ j}$ which is not the same with $\\\\mathcal{M}^j _ p$. We consider the example with $M=4$ and $K=6$: \\n\\n\\\\begin{aligned} \\n\\\\mathrm{Leader:} \\\\ &\\\\mathcal{M}^4_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}, \\\\mathcal{M}^4_2 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} , \\\\mathcal{M}^4_3 = \\\\\\\\{4, 2, 6, 1\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 1:} \\\\ &\\\\mathcal{M}^1_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 2:} \\\\ &\\\\mathcal{M}^2_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 3:} \\\\ &\\\\mathcal{M}^3_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}\\n\\\\end{aligned}\\n\\nIn this example, the leader wants to remove arm $5$ from $\\\\mathcal{M}^4 _ 2$. Note that followers do not know $\\\\mathcal{M}^j _ 2$ and can only select arms in $\\\\mathcal{M}^j _ 1$ in the communication phase. If the leader simply selects arm $5$ in this communication phase, followers will not receive the collision because they just select arm ${4,2,6,3}$. This will lead to missing information from the communication phase.\\n\\nTo avoid this situation, the leader also finds the position of $a^- _ p$ . We define the position of $a^- _ p$ as $i _ {a^- _ p}$. 
Then the leader selects the $i _ {a^- _ p}$-th arm in $\\mathcal{M}^j _ {p-q _ j}$ for $M$ times. Followers still select arms in $\\mathcal{M}^j _ {p-q _ j}$ in a round-robin way. After a collision occurs, followers save the position where the collision happened. Then, when a follower has received both the position of the arm to be removed and the arm to be added, she will update the best arm set. In this example, the leader finds that the position of arm $5$ is $4$, so she selects the $4$-th arm in $\\mathcal{M}^4 _ 1$, i.e. arm $3$, for $M$ times. Followers select arms $\\{4,2,6,3\\}$ in a round-robin way. The detailed arm selection is in the table.\\n\\n| | $t _ 1$ | $t _ 2$ | $t _ 3$ | $t _ 4$ |\\n| ---------- | ----- | ----- | ----- | ----- |\\n| Leader | 3 | 3 | 3 | 3 |\\n| Follower 1 | 4 | 2 | 6 | 3 |\\n| Follower 2 | 2 | 6 | 3 | 4 |\\n| Follower 3 | 6 | 3 | 4 | 2 |\\n\\nEach follower collides on arm $3$ once. They remember that the position of the collision in $p=2$ is $4$. \\n\\n## The second block is used for adding an arm to $\\mathcal{M}^j _ p$.\\nThe leader finds a better arm that has higher empirical rewards but is not in $\\mathcal{M}^j _ p$. Then she wants to pass the new arm to followers. Because the new arm $a^+ _ p$ might not be in $\\mathcal{M}^j _ p$ or $\\mathcal{M}^j _ {p-q _ j}$, we cannot let followers receive the collision information via the best arm set. Thus, we utilize the whole arm set. The leader continuously selects $a^+ _ p$ for $K$ times. Followers select all arms in a round-robin way. This block continues for $K$ rounds because the length of the whole arm set is $K$ and we use the original whole arm set to pass the information.\\nIn the example, $t _ 5$ to $t _ {10}$ denote the rounds in the second block. 
We suppose that the leader wants to add arm $5$.\\n\\n| | $t _ 5$ | $t _ 6$ | $t _ 7$ | $t _ 8$ | $t _ 9$ | $t _ {10}$ |\\n| ---------- | ----- | ----- | ----- | ----- | ----- | -------- |\\n| Leader | 5 | 5 | 5 | 5 | 5 | 5 |\\n| Follower 1 | 1 | 2 | 3 | 4 | 5 | 6 |\\n| Follower 2 | 2 | 3 | 4 | 5 | 6 | 1 |\\n| Follower 3 | 3 | 4 | 5 | 6 | 1 | 2 |\\n\\nEach follower collides on arm $5$ once. They remember that the collision happens on arm $5$.\"}", "{\"title\": \"Response to Reviewer WghT (Part 2)\", \"comment\": \"## The third block is used for passing the ending of exploration.\\nThe leader explores and gradually eliminates all sub-optimal arms from $[K]$. However, followers do not know when the leader's exploration ends. If the leader's exploration has ended and she has passed all optimal arms to followers, continuing to enter the communication phase is a waste of time for followers. Therefore, a block used for passing the ending of exploration is necessary. \\n\\nIn this block, we can utilize $\\\\mathcal{M}^j _ {p-q _ j}$ to pass information. The length of $\\\\mathcal{M}^j _ {p-q _ j}$ is $M$, so this block continues for $M$ rounds. In multi-player bandits, each player has her own optimal arm, so we have $M$ optimal arms in total. When the leader eliminates all sub-optimal arms, i.e., there are $M$ optimal arms left, she will send collisions in the third block of the next communication phase. Otherwise, she does not send collisions in this block.\\n\\n## After a period of delay:\\nNote that the collisions in the first and second blocks cannot be received immediately. After a period of delay, follower $j$ receives the position $4$ and the arm to be added is $5$. She also knows the feedback is from phase $2$. 
Therefore, she updates:\\n$$\\n\\\\mathcal{M} _ 1^j = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\rightarrow \\\\mathrm{replace\\\\ the\\\\ arm\\\\ in\\\\ position\\\\ 4\\\\ with\\\\ arm\\\\ 5} \\\\rightarrow\\\\mathcal{M}^j_2 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\\\\\\\ \\n$$\\nThus, although players use the previous arm set $\\\\mathcal{M} _ 1^j$, followers can still receive the correct update information of $\\\\mathcal{M}_2^j$ even though they do not know $\\\\mathcal{M} _ 2^j$. \\n\\nIn summary, the length of $K+2M$ is short enough to pass the information between players. There is no need to design a time-varying length.\\n\\n# Response to Weakness 2:\\n*Q: The paper lacks an analysis of the lower bound for decentralized algorithm, which should be the main focus of the paper.*\\n\\nIn decentralized multi-player bandits, players intentionally collide with others to simulate communication, which inevitably results in some regret. Therefore, our goal is to minimize the communication duration and the associated regret. To evaluate this, we compare our results with the centralized lower bound to assess how the additional information exchange impacts regret reduction. The goal of decentralized multi-player bandits is to reach the same performance as in the centralized setting. [1, 2, 3] study decentralized multi-player bandits and compare their results with the centralized lower bound.\\n\\n# Response to Weakness 3:\\n*Q: In larger scenarios of experiments, DDSE without delay estimation performs better than DDSE. What is the significance of considering delay estimation in delay estimation algorithms in large-scale scenarios?*\\n\\nWe have updated our experimental results. In all of these experiments, only Figure 8(d) shows that DDSE is slightly worse than DDSE without delay estimation. As explained in the red text in Appendix B, the interval of each communication phase is $KM\\\\log(T)$, which is large when $K$ and $M$ increase. 
A large interval ensures that followers receive feedback from a communication phase before the next communication phase begins. \\n\\nHowever, in large-scale scenarios where the delay is very long, i.e., exceeding $KM\\\\log(T)$, players might update the communication results incorrectly, so DDSE without delay estimation fluctuates considerably. This fluctuation is also visible in Figure 9(d), where DDSE without delay estimation has a large standard error. In real-world cognitive networks, we cannot know the maximum delay in advance. It is impossible to adjust the interval manually. As the number of nodes in cognitive networks increases, real-world delays also grow due to network congestion. It is hard to guarantee that a large $K$ or $M$ is enough to balance the delay. \\n\\nTherefore, DDSE does not need to adjust the interval of each communication phase. No matter what $K$ or $M$ is, and no matter how large the delay is, DDSE always shows good performance and has a better guarantee. Experiments also show that DDSE is more stable than DDSE without delay estimation. In real-world applications, DDSE is more robust and can adapt to complicated cognitive networks.\"}
The total duration of exploration is given in Equation (14). Players are in the exploration phase. They enter a communication phase every $KM\\\\log(T)$ rounds. After the leader eliminates all sub-optimal arms and communicates the ending signal of exploration in the next communication phase, she begins the exploitation phase and continuously selects her arm until $T$. As for followers, they select arms in $\\\\mathcal{M}^j _ {p-q _ j}$ and enter the communication phase every $KM\\\\log(T)$ rounds to receive the collision from the leader. After they receive a collision from the third block of a certain communication phase, they save the number of the final communication phase as $p _ {\\\\max}$. Then followers will no longer enter a communication phase for the remaining time. All they need to do is select arms in $\\\\mathcal{M} _ {p _ {\\\\max}-q _ j}$. As time goes on, $q _ j=0$ and followers select arms in the final best arm set, meaning that they are in exploitation. \\n\\n# Response to Question 2:\\n*Q: Line 190 says, \\"the best empirical set arm set of player j.\\" How is this set defined?*\\n\\nWe define $\\\\mathcal{M} _ {p}^j$ as the best empirical arm set of player $j$ at phase $p$. In the beginning, the set is randomly initialized. As the leader explores, she obtains empirical rewards of arms and ranks the arms by their rewards. The top $M$ arms are put into the set. After finding an update, the leader passes the information to followers in the communication phase. \\n\\n# Response to Question 3:\\n*Q: How is it ensured that the best empirical arm of leader and follower do not overlap in the exploration phase? How is the collision avoided?*\\n\\nTo ensure sufficient exploration, the leader should explore all arms. However, followers select arms in $\\\\mathcal{M} _ {p-q _ j}^j$ in a round-robin way. Thus, we have a sophisticated design for the leader's arm selection. 
We first give an example with $K=8$ and $M=4$:\\n\\n\\\\begin{aligned} \\n\\\\mathrm{Leader:} \\\\ &\\\\mathcal{M}^4_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\quad \\\\mathrm{to} \\\\quad \\\\mathcal{M}^4_2 = \\\\\\\\{4, 2, 6, 8\\\\\\\\} \\\\\\\\\\\\\\\\ \\n\\\\mathrm{Follower\\\\ 1:} \\\\ &\\\\mathcal{M}^1_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 2:} \\\\ &\\\\mathcal{M}^2_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 3:} \\\\ &\\\\mathcal{M}^3_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}\\n\\\\end{aligned}\\n\\nIn this example, we suppose that they are focusing on $\\\\mathcal{M} _ 1^j$. They select arms as follows:\\n\\n| Player | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_5$ | $t_6$ | $t_7$ | $t_8$ |\\n| ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |\\n| Leader | 3 | 4 | 2 | 6 | 1 | 5 | 7 | 8 |\\n| Follower 1 | 4 | 2 | 6 | 3 | 4 | 2 | 6 | 3 |\\n| Follower 2 | 2 | 6 | 3 | 4 | 2 | 6 | 3 | 4 |\\n| Follower 3 | 6 | 3 | 4 | 2 | 6 | 3 | 4 | 2 |\\n\\nThe leader first selects arms in $\\\\mathcal{M} _ 1^4$ with followers. After pulling all arms in $\\\\mathcal{M} _ 1^4$, she begins to select other arms in $\\\\mathcal{K}\\\\backslash \\\\mathcal{M} _ 1^4$. (Here $\\\\mathcal{K}$ denotes the active arm set, to avoid the abuse of $|[K]|$.) The process continues for $KM\\\\log(T)$ rounds.\\n\\nIn our supplemental material, you can find the related code in \\\"$\\\\mathtt{ReviewCode/ours/ddse.py}$\\\". We have implemented the process of arm selection in the function \\\"$\\\\mathtt{play()}$\\\".\"}", "{\"comment\": \"Thanks for your feedback. We will improve it in next version.\"}", "{\"title\": \"Response to Reviewer ix9S (Part 1)\", \"comment\": \"Thank you for your comments about our manuscript. We have studied the comments carefully, and find them valuable for improving our paper. 
The responses to your comments are as follows:\\n# Response to Weakness 1:\\n*Q: Can the paper provide any other relevant examples except cognitive radio?*\\n\\n## Autonomous Vehicles in Traffic Management:\\nWhen multiple autonomous vehicles (players) choose the same lane or intersection (arm) at the same time, traffic congestion or collisions can occur. By applying our algorithm, route optimization and intersection management can be significantly improved, leading to fewer collisions and smoother traffic flow.\\n## Resource Scheduling in Cloud Computing:\\nIn cloud computing, users (players) often compete for access to the same virtual machines or servers (arms), which can create resource bottlenecks and degrade performance. Our algorithm dynamically allocates tasks to available resources, effectively reducing conflicts and improving system efficiency.\\n# Response to Weakness 2:\\n*Q: The paper is very hard to read, hence, the contributions are obscure.*\\n\\nThank you for the thoughtful comments noting that clarity could be improved. We have revised our manuscript by refining notations, enhancing the descriptions of the algorithms, and improving the presentation of the proofs. Below, we summarize our main contributions for better clarity:\\n1. We propose a novel framework for multi-player multi-armed bandits (MMAB) with delayed feedback. To the best of our knowledge, we are the first to address delay in MMAB settings where selecting the same arm results in collisions.\\n\\n2. To tackle this challenge, we introduce the DDSE (Decentralized Delayed Successive Elimination) algorithm. In DDSE, players coordinate to utilize the same best empirical arm set, determined based on their delay estimations, before each exploration phase. This ensures that collisions are effectively avoided.\\n\\n3. We derive a regret bound for DDSE and study its regret bound in the centralized setting. We also derive a centralized lower bound in MMAB with delayed feedback. 
Compared with the lower bound, the regret of DDSE is near-optimal.\\n\\n4. Finally, we validate the efficacy of DDSE through numerical experiments conducted on both synthetic and real-world datasets.\\n\\nWe describe our main algorithm and its proof here.\\n## Algorithm: DDSE\\nOur algorithm is composed of exploration and communication. Because the environment is decentralized, we design a communication phase where players can pass implicit information using intentional collisions to simulate communication. The player with ID $M$ becomes the leader and the others are followers.\\n\\n**Exploration**\\n\\nIn exploration, the leader explores all active arms while followers only select arms from a specific best arm set. The followers' set is updated when they receive information from the leader in the communication phase. Due to the decentralized environment, each player may have a different best arm set. Thus, we define $\\\\mathcal{M} _ p^j$ as the best arm set of player $j$ at phase $p$. Note that the leader sends the information to followers only after she knows the results, but the information that followers receive is delayed for an unknown time. Thus, after every communication phase ends, the updated $\\\\mathcal{M} _ p^M$ differs from $\\\\mathcal{M} _ p^j, j<M$. Different best arm sets lead to: (1) collisions in the exploration phase, and (2) passing wrong information in the communication phase. To avoid this situation, we introduce our method of delay estimation. The intuition is that, although the current best arm set is different, players can select arms in the previous best arm set, which has been received completely. Lemma 1 and Lemma 2 guarantee that players select the same best arm set so that no collision occurs in the exploration phase. \\n\\n**Communication**\\n\\nThe goal of the communication phase is to pass the update of $\\\\mathcal{M} _ p^j$. Each communication phase is composed of three blocks. 
We consider an example with $M=4$ and $K=6$ to better understand the algorithm. We suppose that players are in phase $2$. They use the previous arm set $\\\\mathcal{M} _ 1^j$. The leader wants to update to $\\\\mathcal{M} _ 3^4$.\\n\\n\\\\begin{aligned} \\n\\\\mathrm{Leader:} \\\\ &\\\\mathcal{M}^4_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}, \\\\mathcal{M}^4_2 = \\\\\\\\{4, 2, 6, 1\\\\\\\\}, \\\\mathcal{M}^4_3 = \\\\\\\\{4, 2, 6, 5\\\\\\\\} \\\\\\\\\\\\\\\\ \\n\\\\mathrm{Follower\\\\ 1:} \\\\ &\\\\mathcal{M}^1_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 2:} \\\\ &\\\\mathcal{M}^2_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\} \\\\\\\\\\\\\\\\\\n\\\\mathrm{Follower\\\\ 3:} \\\\ &\\\\mathcal{M}^3_1 = \\\\\\\\{4, 2, 6, 3\\\\\\\\}\\n\\\\end{aligned}\"}" ] }
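The collision-based signaling used by the communication blocks described above can be sketched in a few lines. The following is a minimal, hypothetical Python simulation (not the authors' implementation; names such as `first_block` are invented for illustration): the leader repeatedly plays the arm at a chosen position of the shared set, each follower sweeps the set round-robin from her own offset, and the slot at which a follower's collision occurs reveals the signaled position.

```python
# Hypothetical sketch of one M-round communication block, as described above.
# The leader repeats the arm at index `leader_pos` of the shared set `arm_set`;
# follower j sweeps `arm_set` round-robin starting at offset j. The index at
# which each follower observes her collision decodes `leader_pos`.

def first_block(leader_pos, arm_set):
    M = len(arm_set)
    leader_arm = arm_set[leader_pos]
    decoded = []
    for j in range(M - 1):                    # followers 1..M-1
        hit = None
        for t in range(M):                    # M rounds in the block
            follower_arm = arm_set[(j + t) % M]
            if follower_arm == leader_arm:    # collision with the leader
                hit = (j + t) % M             # position within the shared set
        decoded.append(hit)
    return decoded

# Example from the responses above: shared set {4, 2, 6, 3}, leader signals
# the fourth position (0-based index 3): every follower decodes that position.
print(first_block(3, [4, 2, 6, 3]))  # -> [3, 3, 3]
```

Each follower collides exactly once per block, matching the tables above; the second block works the same way but sweeps the whole arm set of size K, so the collision identifies an arm rather than a position.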
4muXQ5r8Ol
A-Bench: Are LMMs Masters at Evaluating AI-generated Images?
[ "Zicheng Zhang", "Haoning Wu", "Chunyi Li", "Yingjie Zhou", "Wei Sun", "Xiongkuo Min", "Zijian Chen", "Xiaohong Liu", "Weisi Lin", "Guangtao Zhai" ]
How to accurately and efficiently assess AI-generated images (AIGIs) remains a critical challenge for generative models. Given the high costs and extensive time commitments required for user studies, many researchers have turned towards employing large multi-modal models (LMMs) as AIGI evaluators, the precision and validity of which are still questionable. Furthermore, traditional benchmarks often utilize mostly natural-captured content rather than AIGIs to test the abilities of LMMs, leading to a noticeable gap for AIGIs. Therefore, we introduce **A-Bench** in this paper, a benchmark designed to diagnose *whether LMMs are masters at evaluating AIGIs*. Specifically, **A-Bench** is organized under two key principles: 1) Emphasizing both high-level semantic understanding and low-level visual quality perception to address the intricate demands of AIGIs. 2) Various generative models are utilized for AIGI creation, and various LMMs are employed for evaluation, which ensures a comprehensive validation scope. Ultimately, 2,864 AIGIs from 16 text-to-image models are sampled, each paired with question-answers annotated by human experts. We hope that **A-Bench** will significantly enhance the evaluation process and promote the generation quality for AIGIs.
[ "Large multi-modal models", "AI-generated images", "Benchmark" ]
Accept (Poster)
https://openreview.net/pdf?id=4muXQ5r8Ol
https://openreview.net/forum?id=4muXQ5r8Ol
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zreYuaALkL", "yFT21ScSP9", "vVOmZ7KvBu", "toOjLoU0gV", "reu0jRl7A2", "qly712uEDo", "qUuvrklmyN", "fhceFAee74", "cjPF6XDHp4", "cUqow9jOlN", "ZMIYFroFzM", "XCQQO4G89N", "SSIPVpi7TW", "M4lllZnVa2", "KYWNU1YaJ6", "K5Iy85aIL6", "GBjDEAuBuS", "EGXYpj5tMb", "7njLuF9TMF", "70wNGYXNJf" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731846519506, 1731846641282, 1731846654096, 1730617000118, 1731846612572, 1731052051248, 1737523789837, 1730973012634, 1731846590609, 1731846540642, 1734593105564, 1730263474545, 1731846445423, 1732223252585, 1732603950919, 1732604198340, 1731846557742, 1731846484524, 1732322189287, 1731846602414 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_vuEQ" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_8kKW" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_WKMw" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Area_Chair_QHpv" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_AWPs" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_AWPs" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6761/Authors" ], [ "ICLR.cc/2025/Conference/Submission6761/Reviewer_vuEQ" ], [ "ICLR.cc/2025/Conference/Submission6761/Authors" ] ], "structured_content_str": [ "{\"title\": \"Official Responses to Reviewer 8kKW\", \"comment\": \"**6. Meaning of the findings**\\n\\nThank you for your concern. **While it may seem obvious that closed-source LMMs generally outperform open-source ones, the specific gaps and their underlying causes are worth exploring.** For instance, the best-performing open-source model in `Basic Recognition` is actually quite close to the closed-source models. However, the difference becomes more pronounced in tasks like `Composition Identification` and `Number of Objects Counting`, highlighting areas where open-source models still have room for improvement. If these open-source models are used as evaluators for AIGI, their lower performance in these dimensions could explain why they underperform compared to closed-source models. **Therefore, while the general finding may appear obvious, the detailed comparison in A-Bench offers valuable insights into the specific strengths and weaknesses of these models.** We will also emphasize these points in the discussion section to highlight our contributions.\\n\\nRegarding your point that \\\"LMMs' insufficient perception of distortion has already been mentioned in works like Q-Bench,\\\" we would like to clarify that **this is only a part of our conclusion in A-Bench-P2**. We further discuss **how most LMMs exhibit their weakest performance** in the `Generative Distortion Assessment` subcategory. Additionally, we highlight an interesting finding: while humans typically perform better in` Technical Quality Perception` compared to `Aesthetic Quality Evaluation`, LMMs show similar performance levels in both subcategories\\u2014an analysis not covered in Q-Bench. 
Therefore, relying solely on this familiar conclusion to dismiss the more detailed contributions in our later discussion is not entirely appropriate. We hope for your understanding.\\n\\n**7. Difference Between AIGI tasks and Conventional Cognition Tasks**\\n\\nThank you for your question. **The key distinction is that traditional cognition tasks are designed to assess a model's understanding capabilities, whereas A-Bench focuses on using cognition tasks to diagnose potential issues in LMM evaluation, specifically related to AIGI generation**. As mentioned in our response to the fourth point, our proposed **High-level Semantic Question Answering is closely tied to AIGI evaluation, particularly in alignment assessment.** By evaluating LMM performance across different semantic dimensions in AIGI, we aim to identify evaluation challenges and suggest possible areas for improvement.\\n\\n**8. Differences Between the Low-level Perceptual Aspects of AIGI and Some Earlier Works**\\n\\nThank you for your question. The key difference between the **low-level perceptual aspects** in A-Bench and previous works such as Q-Bench [17] and DepictQA [18] lies in the focus and optimization for AIGI tasks. Previous works primarily focus on **general perceptual dimensions in traditional Image Quality Assessment (IQA), often targeting natural images, and do not specifically optimize for AIGI low-level evaluation.** For instance, Q-Bench broadly categorizes low-level perceptual aspects into `distortions` and `other attributes`, without addressing AIGI-specific issues.\\n\\nIn contrast, **A-Bench systematically decouples low-level perceptual evaluation into three distinct areas: `technical`, `aesthetic`, and `generative distortion`.** The design of `technical` and `aesthetic` dimensions is based on the fact that AIGIs share certain common low-level aspects with traditional IQA in these areas. 
However, in particular, A-Bench **places a strong emphasis on generative distortion**, which includes issues like generative blur (typically caused by incomplete generation, distinct from traditional motion blur or compression artifacts), confusing geometric structures, and unnaturalness. We have specifically designed question-answer pairs to assess these generative distortions. This focus marks the most significant distinction between A-Bench and earlier works in terms of low-level perceptual evaluation.\"}", "{\"title\": \"Official Response to Reviewer AWPs\", \"comment\": \"First, we would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.\\n\\n**1. How LMMs Reason Before Giving the Choice**\\n\\nThanks for pointing out the importance of verifying whether LMMs truly understand their choice. Following your suggestions, we randomly sample a small subset which consists of 200 question-answer pairs for validation. Specifically, we modify the question query into:\\n\\n```\\n#User: [Question] [Image Token]\\n[Choice A] [Choice B] [Choice C] [Choice D] \\nAnswer with the option\\u2019s letter from the given choices and give the reasons for why you choose this option.\\n```\\n\\nThe prompt includes the instruction **give the reasons for why you choose this option** to encourage the LMMs to explain their choices, allowing us to assess whether they truly understand the question or are merely guessing. Considering the complexity of LMM responses, introducing additional LMMs as judges could introduce bias; therefore, we conducted a **user study to evaluate the alignment of the LMMs' reasoning processes with their choices.**\\nThis evaluation is only applied when the LMMs provide the correct option. Specifically, we present observers with the image, question-answer pair, and the corresponding LMM response, asking them to judge whether the LMM\\u2019s choice and reasoning are aligned. 
If aligned, the response is scored 1; if not, it is scored 0. Each evaluation involves two observers. If their judgments match, the result is recorded; if they disagree, a third observer arbitrates, and the majority decision is recorded. We select some LMMs (GPT-4o, Gemini 1.5 Pro, CogVLM2-19B, LLaVA-NeXT-8B) with good performance on A-Bench for this experiment. The results are illustrated below:\\n\\n| LMM | Basic Recognition | Bag-of-Words | Outside Knowledge | Technical | Aesthetic | Generative | Overall |\\n|-----------------|-------------------|--------------|-------------------|-----------|-----------|------------|---------|\\n| GPT-4o | 0.95 | 0.93 | 0.93 | 0.82 | 0.85 | 0.80 | 0.90 |\\n| Gemini 1.5 Pro | 0.94 | 0.92 | 0.91 | 0.87 | 0.81 | 0.79 | 0.89 |\\n| CogVLM2-19B | 0.93 | 0.93 | 0.91 | 0.82 | 0.79 | 0.74 | 0.86 |\\n| LLaVA-NeXT-8B | 0.92 | 0.94 | 0.92 | 0.81 | 0.81 | 0.76 | 0.86 |\\n\\nFrom the results, we observe that in terms of semantic understanding, LMMs generally grasp the content of the questions well, with their reasoning process aligning closely with the chosen options. However, in areas related to visual quality understanding\\u2014where their performance is already weaker\\u2014some correct answers do not align with the reasoning, indicating a few cases of correct guesses. Nevertheless, the overall alignment accuracy remains acceptable, and the LMMs show relatively stable performance across questions within the same category. This suggests that A-Bench testing is still reasonable, accurate, and meaningful.\"}", "{\"title\": \"Official Response to Reviewer AWPs\", \"comment\": \"**3. Missing Definition for SRCC/PLCC**\\n\\nWe apologize for the missing definitions of **SRCC** and **PLCC** and appreciate you bringing this to our attention. Spearman's Rank Correlation Coefficient (SRCC) and Pearson's Linear Correlation Coefficient (PLCC) are widely used metrics for evaluating the correlation between predicted scores and ground truth scores in tasks like quality assessment. \\n\\n**SRCC** measures the rank-based correlation between two variables. Instead of comparing the raw values, it assesses the relationship based on the relative ordering (ranks) of the scores.\\n\\n**PLCC** measures the linear relationship between two continuous variables. Unlike SRCC, it uses raw values rather than ranks, calculating how well one set of scores can predict another through a linear relationship.\\n\\n**4. Different Instruction Prompts for Different LMMs**\\n\\nThank you for your question, and we\\u2019re happy to clarify. The default prompt used in A-Bench may cause certain LMMs to struggle with selecting the correct option. 
We split the content into **Human**, **Animal**, and **Objects**, then we calculate the performance of each content type for illustration.\\n\\n| LMM | Human | Animal | Objects |\\n|-----------------|-------------------|--------------|--------------|\\n| GPT-4o | 0.71 | 0.66 | 0.80 |\\n| Gemini 1.5 Pro | 0.70 | 0.63 | 0.79 |\\n| CogVLM2-19B | 0.68 | 0.64 | 0.74 |\\n| LLaVA-NeXT-8B | 0.60 | 0.55 | 0.69 |\\n\\nInterestingly, our findings indicate that LMMs perform best on images with **Objects** content, while they struggle more with images featuring **Animal** content. This may be because objects tend to be simpler and more straightforward, making them easier for the models to understand. In contrast, animals often exhibit unusual or complex generated structures and patterns, which can negatively impact the accuracy of LMMs' understanding.\"}", "{\"title\": \"Official Response to Reviewer AWPs\", \"comment\": \"**3. Missing Definition for SRCC/PLCC**\\n\\nWe apologize for the missing definitions of **SRCC** and **PLCC** and appreciate you bringing this to our attention. Spearman's Rank Correlation Coefficient (SRCC) and Pearson's Linear Correlation Coefficient (PLCC) are widely used metrics for evaluating the correlation between predicted scores and ground truth scores in tasks like quality assessment. \\n\\n**SRCC** measures the rank-based correlation between two variables. Instead of comparing the raw values, it assesses the relationship based on the relative ordering (ranks) of the scores.\\n\\n**PLCC** measures the linear relationship between two continuous variables. Unlike SRCC, it uses raw values rather than ranks, calculating how well one set of scores can predict another through a linear relationship.\\n\\n**4. Different Instruction Prompts for Different LMMs**\\n\\nThank you for your question\\uff0c we\\u2019re happy to clarify. The default prompt used in A-Bench may cause certain LMMs to struggle with selecting the correct option. 
For instance, when testing BakLLava-7B, it occasionally outputs irrelevant text instead of selecting from the options. However, when we modify the prompt from **Answer with the option\\u2019s letter directly from the given choices** to **Please tell me the choice for the correct answer**, these errors occur much less frequently. Through multiple trials, we found that while our default prompt generally provides stable instructions for LMM responses, some LMMs occasionally need slight adjustments. These modifications are minor and infrequent, so their overall impact on results is minimal.\\n\\n\\n**5. Detailed Information About Human Annotation**\\n\\nThank you for your question.\\nFirst, we issue an invitation to recruit participants familiar with visual quality and AIGI for in-person training. During the training, participants are introduced to the annotation tasks they will perform. They then practice with additional AIGI data prepared for hands-on annotation, following specific requirements. Afterward, we organize expert-led discussions and assessments of the annotations. Those who pass the evaluation are recruited as annotators. Ultimately, fifteen participants successfully complete the training and are selected.\\n\\nTo prevent fatigue, each person is limited to annotating a maximum of thirty entries per day. Each annotated entry has to be reviewed and approved by three other participants for it to be considered valid (with each review also counted as an annotation). On average, each person annotates approximately 750 entries, and the entire annotation process takes about two months to complete.\\n\\nThanks again for your comments on refining our work.\"}", "{\"summary\": [\"Human evaluations are the gold standard for evaluating generative models especially text-to-image (T2I) models. However, they are expensive. 
An alternative is using automatic metrics and Large Multi-modal Models (LMMs) are a popular choice.\", \"LMMs are trained on real images and AI-generated Images (AIGIs) are out of domain for LMMs questioning their reliability as evaluation models.\", \"This work proposes A-Bench a diagnostic benchmark for assessing the reliability of LMMs for evaluating AIGIs.\", \"A-Bench consists of two subsets 1) A-Bench P1 to evaluate the text faithfulness or prompt adherence of T2I models and 2) A-Bench P2 to evaluate the quality of the generations.\", \"Authors samples 2864 AIGIs from 16 open and closed source T2I models. For each generation, they sourced human experts to annotate question-answer pairs and computed the accuracies of popular proprietary and open-source LMMs.\", \"Authors report that 1) Proprietary LMMs are better than open-source counterparts for text faithfulness, 2) proprietary LMMs perform as well as humans on simple enough prompts and 3) LMMs are not good models to evaluate the generation quality of AIGIs.\"], \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Authors address a very important problem: Are current LLM/LMMs good enough to be used as judges for generative models? This line of research can provide valuable insights to train better LMMs for understanding AIGIs.\", \"A-Bench along with standard LMM evaluation benchmarks provide a complete picture of an LMMs capability to understand both real and AI generated images.\", \"The paper is well written and very easy to follow containing all the details necessary for reproduction.\", \"The experimental section is exhaustive with comparisons provided for both proprietary and open-source LMMs.\"], \"weaknesses\": [\"I didn't find any major weakness with this work.\"], \"questions\": [\"I would recommend authors check out SelfEval [1] as another source of evidence that external models cannot be reliable for evaluating T2I models. 
Please discuss it if relevant.\", \"In my experience there is a huge variance to the responses provided by LLMs/LMMs. Did the authors compute variance of the scores or perform any statistical significance studies?\", \"L272 controversial -> counter factual\", \"In the introduction (L112-L117), in my opinion, authors should provide some numbers to make the point that LMMs are still not masters at evaluating AIGIs. Right now authors state that \\\"there remains a considerable gap and significant room for improvement\\\". Instead providing some numbers can make it more straightforward.\", \"[1] Sai Saketh Rambhatla, Ishan Misra, SelfEval: Leveraging the discriminative nature of generative models for evaluation\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer vuEQ\", \"comment\": \"First and foremost, we would like to thank the reviewer for the time and valuable feedback. We are sincerely grateful for the recognition and appreciation expressed. Our point-by-point responses are as follows:\\n\\n**1. Discussion About SelfEval**\\n\\nThanks for your suggestions. We carefully review SelfEval [1] and find it highlights key weaknesses of using external models, such as: ``Evaluation metrics can vary widely depending on the chosen model, impacting reliability. If the same model is used in both training and evaluation, results may be biased, not reflecting true performance. External models often struggle with certain tasks, such as counting or recognizing specific attributes, making their evaluation scores unreliable'' This provides strong evidence that external models may not be reliable for evaluating T2I models.**We will include a discussion of SelfEval in the introduction.** Thank you for the reminder.\\n\\n**2. Responses Variance of LLMs/LMMs**\\n\\nThank you for your question. 
Your concern is crucial, as the accuracy and stability of the benchmark directly affect the quality of evaluation. Here, we will address this:\\nFirst, we use a consistent prompt instruction format to minimize any misunderstanding by LMMs and standardize the output. Additionally, we set the model's temperature parameter to 0, meaning the LMM's output will no longer be affected by randomness. As a result, the model will give the same response to the same question each time, eliminating variance.\\n\\nIt\\u2019s also worth noting that increasing the model's temperature to encourage more diverse and exploratory answers is indeed an interesting consideration. **To further address your concern about the statistical significance of the experiment, which we believe is crucial and important, we repeat the A-Bench experiment for 5 rounds with different temperature settings across several popular 7B-8B LMMs.** The performance is listed in the table below, with the results presented as the mean accuracy \\u00b1 standard error.\\n\\n| Temperature | DeepSeek-VL-7B | LLaVA-NeXT-8B | LLaVA-v1.5-7B | Qwen-VL-7B |\\n|-------------|----------------|---------------|---------------|-------------|\\n| 0.0 | 66.58\\u00b10.00 | 67.75\\u00b10.00 | 62.97\\u00b10.00 | 60.41\\u00b10.00 |\\n| 0.5 | 65.11\\u00b11.72 | 66.43\\u00b12.09 | 60.61\\u00b12.23 | 58.17\\u00b11.89 |\\n| 1.0 | 62.04\\u00b14.51 | 63.77\\u00b13.86 | 59.22\\u00b14.01 | 55.22\\u00b16.04 |\\n\\nBased on the results, we can observe that when the temperature is set to zero, the accuracy results for all LMMs remain consistent across all 5 rounds. As the temperature increases, the average performance declines and the results become more unstable, with higher standard errors. Therefore, to ensure reproducibility and performance stability, we prefer the **zero-temperature setting**, as it more accurately and reliably reflects the performance of LMMs, making it more suitable for practical applications.\\n\\n**3. 
Inappropriate Words and Writing Improvement**\\n\\nThanks for your constructive suggestions. We have changed the word `controversial` on L272 to `counterfactual`. \\nWe have also revised the statement in the introduction to: `A substantial **performance gap of 16%** remains between the best-performing LMMs and human evaluators on AIGI assessments, indicating significant room for improvement.`\\n\\nThanks again for your valuable suggestions on improving our work.\\n\\n[1] Sai Saketh Rambhatla, Ishan Misra, SelfEval: Leveraging the discriminative nature of generative models for evaluation\"}", "{\"summary\": \"Due to the existing evaluation models' inability to effectively assess the performance of AIGI tasks, more and more researchers are turning to LMMs for evaluating the quality of generated images. The authors question this approach and design a framework consisting of seven dimensions focused on high-level semantic understanding and low-level quality evaluation to assess the quality of AIGI. By manually annotating 2864 different image quality issues, the authors compare the evaluation performance of multiple open-source and closed-source LMMs and contrast these with human evaluation results, summarizing numerous shortcomings of LMMs in the AIGI quality assessment task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The authors manually annotated a dataset containing 2864 image quality issues, which contributes to the development of AIGI evaluation.\\n2. The authors evaluate AIGI quality from high-level semantic aspects like counting and low-level aspects like distortion, providing valuable insights for subsequent general AIGI task evaluations.\\n3. The paper's A-Bench includes the evaluation performance of multiple LMMs, offering guidance for researchers who wish to use LMMs for AIGI quality assessment.\", \"weaknesses\": \"1. Although A-Bench includes multiple LMMs, it lacks some of the latest SOTA models. 
Better models, such as Qwen2-VL and MiniCPM-V-2.6, can be found on OpenCompass. The paper does not specify the version of GPT-4o used, such as gpt-4o-2024-08-06 or gpt-4o-2024-05-13, which is crucial for future researchers.\\n2. The AIGI models used to generate the dataset are somewhat outdated, lacking relatively advanced image generation models such as SD3, PixArt, Flux, etc. Currently, the more outstanding AIGI models often embed large language models, which might significantly impact the evaluation conclusions.\\n3. The questions are all manually generated, which is certainly good. However, this makes the evaluation dataset difficult to expand, and it might lose value as AIGI models rapidly evolve. It would be better if the questions could be designed based on the text prompts of T2I models.\\n4. Compared to previous work, the paper's main contribution, i.e., high-level semantic question answering, is not strongly related to AIGI and does not seem necessary to research specifically in the AIGI context.\\n5. Two-thirds of the low-level semantic question-answering data come from other datasets, reducing the paper's contribution.\\n6. The paper's findings are somewhat unremarkable. It is obvious that closed-source LMMs perform better than open-source ones, and some other findings, such as the LMMs' insufficient perception of distortion, have already been mentioned in works like Q-Bench.\", \"questions\": \"1. Overall, the paper adds the evaluation of LMMs' high-level semantic cognition for AIGI, in addition to previous work using LMMs to assess image generation quality. However, it does not highlight the difference between AIGI tasks and conventional cognition tasks. Could the authors elaborate on this further?\\n2. Generally, the authors focus more on evaluating the perceptual capabilities of LMMs, but these perceptual capabilities are more inclined towards the low-level aspects for AIGI tasks. 
Could the authors further elaborate on the differences between the low-level perceptual aspects of AIGI and some earlier works?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper introduces a benchmark designed to assess the efficacy of large language models (LLMs) in evaluating AI-generated images (AIGI). As the field increasingly depends on LLMs for this evaluation\\u2014sidestepping the high costs and time commitments of traditional user studies\\u2014quantifying the quality and reliability of LLM-based assessments is essential. While it's generally accepted that LLM evaluations fall short of human assessments, this paper provides a systematic analysis of the performance gap across various LLMs, comparing open-source and closed-source models to human evaluations. The benchmark defines several key metrics within two primary dimensions: Semantic Reasoning and Quality Perception. Using this framework, the study measures the performance of multiple LLMs, revealing a substantial disparity between human judgment and LLM performance in AIGI evaluation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The benchmark is undoubtedly useful. Given the growing reliance on LLMs to evaluate various AI-generated content like images, having a comprehensive, quantitative benchmark that assesses the effectiveness of LLMs in evaluation is highly valuable.\\n2. The paper tries to objectively define the underlying metrics of evaluation.\\n3. The benchmark development involved a rigorous process, starting with user studies to establish a baseline, followed by testing various LLMs, which adds credibility and depth to the analysis.\\n4. 
While the findings align with expectations, quantifying the gap between human and LLM performance is a valuable contribution. It enables the research community to approach improvements in this field with a more data-driven perspective, facilitating measured, progressive advancements.\", \"weaknesses\": \"1. While the metrics cover several important facets of semantic reasoning, they lack a rigorous scientific foundation, raising questions about whether they capture the full scope of semantic understanding as implicitly perceived by humans. Specific dimensions of semantic reasoning, such as cultural nuances, or emotional depth, may be missing from the current metrics, which could impact the holistic evaluation of AI-generated images. As such, while the comparisons of different LLMs using these metrics provide intriguing insights, it remains questionable whether these metrics are robust enough to serve as a truly holistic benchmark for evaluating semantic reasoning in AI-generated images.\\n\\n2. The number of images used (~2,000) feels arbitrary and may be insufficient to capture the nuanced aspects of reasoning and quality perception required for a comprehensive evaluation. Expanding the dataset to around 5,000\\u201310,000 images, with careful attention to diversity across image types and contexts, could improve the robustness of the analysis. Additionally, it would be helpful for the authors to provide a rationale for this dataset size or acknowledge any limitations they faced in scaling up.\", \"questions\": \"1. It would strengthen the paper if the authors could provide scientific backing for the proposed metrics, citing sources that systematically define each measure. Specific areas where additional references might be valuable include the validity of semantic reasoning components and quality perception dimensions, to help ensure that the chosen metrics align with established frameworks in the field.\\n\\n2. 
Providing further details on the image selection process would clarify the robustness of the benchmark. Specifically, information on the criteria for image selection, the diversity of image types, and the distribution of different content categories would offer valuable context. If possible, outlining how these factors impact the benchmark's representativeness, and validity could further enhance transparency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response to Reviewer WKMw\", \"comment\": \"**3. Details of the Image Sampling**\\n\\n**3-A) AIGI sampling for A-Bench-P1.**\\nAs mentioned above, **A-Bench-P1** is designed to address the text-alignment issue, so we adopt a manual approach to collect prompts. Specifically, we carefully craft prompts to target key aspects, such as:\\n\\n`Basic Recognition -> Major Object Recognition`: An elaborate treehouse in a thick forest, with children playing inside, rope bridges connecting to other trees, and birds chirping around.\\n\\n`Basic Recognition -> Minor Object Recognition`: A magical fairy ring in a moonlit forest, with tiny glowing fairies dancing and mystical plants all around.\\n\\n`Bag-of-Words -> Attributes Awareness`: A delicate, frosty, crystal snowflake beside a warm, glowing, amber ember on a smooth, slate-gray stone.\\n\\n`Bag-of-Words -> Nouns as Adjectives Awareness`: Shark-sleek submarine exploring ocean depths.\\n\\n`Bag-of-Words -> Composition Identification`: A gamer's setup with consoles and controllers on a desk, multiple screens above, and game boxes and snacks partially obscured beneath the desk.\\n\\n`Bag-of-Words -> Number of Objects Counting`: Six logs in a woodpile, stacked so tightly that they seem to form a solid block.\\n\\n`Outside Knowledge -> Specific Terms Recognition`: A barometer showing a rapid decrease in pressure.\\n\\n`Outside Knowledge -> Contradiction Overcome`: 
A ship floating above the clouds, sails made of sunlight.\\n\\nTo demonstrate content and context diversity, we calculated the **Text Information Entropy** of the text, which resulted in a score of 5.1, indicating high diversity [17]. Additionally, to ensure that the AIGIs cover a broad range of applications, we utilized 15 different AIGI models to generate the images, randomly sampling one AIGI per text prompt. A sample overview can be seen in Fig. 6 of the manuscript.\\n\\n**3-B) AIGI sampling for A-Bench-P2**. A-Bench-P2 is designed for the quality evaluation of AIGIs. Consequently, it is essential to ensure that the collected AIGIs span a wide quality range to address various practical scenarios. For `Technical Quality`, we sample 500 AIGIs from the AIGIQA-20K dataset [18] using a uniform sampling strategy. Specifically, each AIGI in the AIGIQA-20K dataset is assigned a mean opinion score (MOS) for technical quality. We apply **uniform sampling** to create more even distributions, as illustrated in Fig. 7 (in the manuscript). For `Aesthetic Quality`, in the absence of provided aesthetic scores, we utilize q-align [19], an effective quality predictor, to infer the aesthetic values of AIGIs. Subsequently, we perform **uniform sampling** similarly to obtain 500 AIGIs for aesthetic evaluation. For `Generative Distortion`, we manually select 500 AIGIs exhibiting unexpected AIGI-specific distortions. It is important to note that there is no content overlap among the selected AIGIs, as shown in Fig. 8 (in the manuscript).\"}", "{\"title\": \"Official Responses to Reviewer 8kKW\", \"comment\": \"**References**\\n\\n[1] Lin Z, Pathak D, Li B, et al. Evaluating text-to-visual generation with image-to-text generation[C]//European Conference on Computer Vision. Springer, Cham, 2025: 366-384.\\n\\n[2] Cho J, Hu Y, Garg R, et al. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation[J]. 
arXiv preprint arXiv:2310.18235, 2023.\\n\\n[3] Ku M, Jiang D, Wei C, et al. Viescore: Towards explainable metrics for conditional image synthesis evaluation[J]. arXiv preprint arXiv:2312.14867, 2023.\\n\\n[4] Nichol A, Dhariwal P, Ramesh A, et al. Glide: Towards photorealistic image generation and editing with text-guided diffusion models[J]. arXiv preprint arXiv:2112.10741, 2021.\\n\\n[5] Saharia C, Chan W, Saxena S, et al. Photorealistic text-to-image diffusion models with deep language understanding[J]. Advances in neural information processing systems, 2022, 35: 36479-36494.\\n\\n[6] Liu Y, Duan H, Zhang Y, et al. Mmbench: Is your multi-modal model an all-around player?[C]//European Conference on Computer Vision. Springer, Cham, 2025: 216-233.\\n\\n[7] Xu P, Shao W, Zhang K, et al. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models[J]. arXiv preprint arXiv:2306.09265, 2023.\\n\\n[8] Chatterjee A, Stan G B M, Aflalo E, et al. Getting it right: Improving spatial consistency in text-to-image models[C]//European Conference on Computer Vision. Springer, Cham, 2025: 204-222.\\n\\n[9] Motamed S, Paudel D P, Van Gool L. Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models[J].\\n\\n[10] Wang, Y., Zhang, L., Chen, T., & et al.. (2024). Scene graph disentanglement and composition for generalizable complex image generation. arXiv preprint arXiv:2410.00447.\\n\\n[11] Huang, L., Zhang, Y., Yang, W., & et al.. (2024). IterComp: Iterative composition-aware feedback learning from model gallery for text-to-image generation. arXiv preprint arXiv:2410.07171.\\n\\n[12] Litalby, I., Boulanger, S., & others. (2024). Make It Count: Text-to-Image Generation with an Accurate Number of Objects. arXiv Preprint, 2406.03070.\\n\\n[13] Zhou, Y., Xu, W., & Li, X. (2023). Object Count Generation in Diffusion Models. 
IEEE Transactions on Neural Networks and Learning Systems, 34(11), 2463-2475.\\n\\n[14] Schwenk D, Khandelwal A, Clark C, et al. A-okvqa: A benchmark for visual question answering using world knowledge[C]//European conference on computer vision. Cham: Springer Nature Switzerland, 2022: 146-162.\\n\\n[15] Vu, H., et al. (2023). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. arXiv preprint arXiv:2406.13805.\\n\\n[16] Li C, Kou T, Gao Y, et al. Aigiqa-20k: A large database for ai-generated image quality assessment[J]. arXiv preprint arXiv:2404.03407, 2024, 2(3): 5.\\n\\n[17] Wu H, Zhang Z, Zhang E, et al. Q-bench: A benchmark for general-purpose foundation models on low-level vision[J]. arXiv preprint arXiv:2309.14181, 2023.\\n\\n[18] You Z, Gu J, Li Z, et al. Descriptive image quality assessment in the wild[J]. arXiv preprint arXiv:2405.18842, 2024.\"}", "{\"metareview\": \"Summary\\nThe paper examines whether multimodal models can evaluate image generation models. The authors propose a benchmark, A-Bench, that can evaluate both the text alignment and the image quality of generation models. A-Bench is a diagnostic benchmark, and the authors use different multimodal models. Their key finding is that not all multimodal models can serve as evaluators, and certain closed-source models are more suited for this task.\\n\\nStrengths\\n1. The paper studies an important problem: easing the evaluation of generation models by using image understanding models.\\n2. The experiments in this paper cover a wide variety of models for both generation and understanding. This comprehensive study is valuable.\\n3. The conclusions drawn in the paper are novel and important for the research community.\\n\\nWeaknesses\\n1. The size of the diagnostic benchmark seems small. While the authors do provide the comparison to MMBench, I believe MMBench is used in conjunction with multiple benchmarks. 
If the authors intend such a use for A-Bench, it would be good to clarify in the paper.\\n2. Unexplained abbreviations, as also pointed out by one reviewer.\\n\\nSuggestions\\nPerhaps using a stronger Llama image-understanding model could strengthen the paper? The closed-source GPT and Gemini systems are hard to use at scale for many researchers.\\n\\nJustification\\nThis is a well-written paper that studies an important problem. Technically sound and well executed.\", \"additional_comments_on_reviewer_discussion\": \"The authors engaged with the reviewers to address questions. Two of the reviewers didn't engage with the authors, but the AC has read through the reviews and believes that their concerns were already answered.\"}", "{\"summary\": \"This paper studies multimodal LLMs' ability in the context of image evaluation. Instead of studying the effectiveness of certain LLM-based metrics, this work aims to identify whether multimodal LLMs (LMMs) are truly capable of evaluating AI-generated images through a question-answering benchmark. It proposes a benchmark that contains 2,864 sets of questions which can be categorized into 6 categories, involving semantic understanding and quality perception. After benchmarking a total of 18 LMMs and comprehensive analysis, the authors came to the conclusion that LMMs are still not masters at evaluating AI-generated images.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"S1) The paper is well-written and well-organized. All the relevant details are included in the main paper and appendix. The evaluation method also seems rigorous.\\n\\nS2) A good pitch in studying LMMs directly through question answering instead of studying the effectiveness of certain LMM-based metrics. 
This helped to shed light on the true capabilities of current LMM-based image evaluation metrics.\", \"weaknesses\": \"W1) The paper would have provided more insights if the authors had also studied the reasoning to verify whether the LMMs truly understand how to evaluate each category (i.e., did the reasoning fully explain the choice made by the LMM?). This might help to explain the gap between the performance of LMMs and humans. I suggest conducting a study on a small subset for each category and seeing how the reasoning aligned with the choice made.\\n\\nW2) It would also be desirable to see what kinds of images LMMs evaluate poorly across each category in AIGI. A more detailed diversity analysis of the AIGI dataset is required. E.g., for Basic Recognition, what portion of the questions concern recognition of animals, humans, or artifacts? Are these LMMs doing poorly on certain types of objects in particular?\", \"questions\": \"Q1) What is SRCC/PLCC in the introduction paragraph? It seems this abbreviation is never explained in the paper.\\n\\nQ2) In section 4.1, \\\"It\\u2019s worth noting that the instruction prompt might slightly differ for different LMMs according to the official setting.\\\" Why is the instruction prompt slightly different for different LMMs? How will it impact the performance of LMMs?\\n\\nQ3) For the human annotators, how were they recruited? What kind of training were they given? How many instances are labelled by each human annotator? I am also interested in the total time required to build this benchmark.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Responses to Reviewer 8kKW\", \"comment\": \"We would like to thank the reviewer for the time and meaningful comments. 
First, we would like to kindly clarify a critical point: weakness 5, \"Two-thirds of the low-level semantic question-answering data come from other datasets\", rests on a misunderstanding. **All the Question-Answering data in A-Bench is entirely original and created by us, and none of it comes from any other existing datasets.** We will then address your concerns point-by-point.\\n\\n**1. Adding the Latest SOTA Models**\\n\\nThank you for your suggestions. We have further tested five additional models. Please note that **part of the correct answers in the A-Bench dataset are kept private** (to avoid data leakage). The updated performance is shown in the table. Additionally, the version of GPT-4o used in the paper is GPT-4o-2024-05-13. For completeness, we have also included GPT-4o-2024-08-06 for comparison.\\n\\n| LMM | Basic Recognition | Bag-of-Words | Outside Knowledge | Technical | Aesthetic | Generative | Overall |\\n|-----------------|-------------------|--------------|-------------------|-----------|-----------|------------|---------|\\n| GPT-4o (2024-08-06) | 0.939 | 0.832 | 0.678 | 0.703 | 0.621 | 0.676 | 0.758 |\\n| GPT-4o (2024-05-13) | 0.947 | 0.813 | 0.675 | 0.706 | 0.616 | 0.679 | 0.759 |\\n| **Qwen2-VL-72B** | 0.949 | 0.822 | 0.701 | 0.742 | 0.603 | 0.702 | **0.767** |\\n| MiniCPM-V-2.6 | 0.934 | 0.910 | 0.699 | 0.691 | 0.601 | 0.605 | 0.744 |\\n| InternVL2-40B | 0.947 | 0.920 | 0.697 | 0.663 | 0.632 | 0.501 | 0.752 |\\n| Ovis1.5-Llama3-8B | 0.931 | 0.924 | 0.692 | 0.708 | 0.678 | 0.554 | 0.751 |\\n| LLaVA-OneVision-7B | 0.929 | 0.924 | 0.695 | 0.688 | 0.678 | 0.543 | 0.748 | \\n\\n**2. Timeliness of A-Bench**\\n\\nCreating a benchmark involves generating images, collecting data, training evaluators, and verifying data quality, making the process both time-consuming and costly. As a result, it is inevitable that AIGI benchmarks may not always keep pace with the latest technologies or models. 
However, the insights provided by the benchmark in evaluating AIGI remain valuable and offer useful guidance. We will address this limitation in the discussion section. Of course, we are committed to ongoing updates and expansions to ensure the benchmark remains current. We appreciate your understanding.\\n\\n**3. Question Annotation**\\n\\nThank you for your comment. **While human annotations require significant time and may cause AIGI benchmarks to lag behind the rapid evolution of technology, they remain essential.** Human annotations are critical for AIGI benchmarks because **they provide accurate, consistent, and context-sensitive evaluations that AI models alone cannot achieve**, particularly when assessing subjective qualities such as creativity, coherence, and alignment with complex prompts. Although human involvement makes it more challenging to scale datasets and can delay the integration of the latest model advancements, it ensures reliable ground truth, addresses edge cases, and upholds ethical and cultural sensitivity. Moreover, human reviewers are capable of evaluating complex, nuanced aspects of generated images that current AI models cannot effectively assess, helping to ensure the benchmark remains both meaningful and trustworthy, even as AIGI technologies rapidly evolve.\\n\\nThank you for your suggestion regarding using prompts for annotation; we find it very valuable. However, there are exceptions, such as cases involving prompts that require outside knowledge, where the AIGI may not generate outputs as expected, necessitating human verification. Additionally, for visual quality annotations, the prompt itself has limited relevance. We also recognize that prompts could constrain annotators' creativity, potentially overlooking interesting and diverse questions. Nonetheless, we will consider your suggestion in future annotation efforts. 
Thank you again for your insightful feedback.\"}", "{\"title\": \"Response to rebuttal.\", \"comment\": \"Hi, thanks for the detailed response. I will keep the score as it is.\"}", "{\"title\": \"Looking forward to discussion.\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable and constructive feedback. We have carefully addressed your comments and revised the paper accordingly. We kindly ask if these revisions have resolved your concerns. If so, we would greatly appreciate your consideration of a raised rating.\\n\\nIf there are any remaining issues or concerns, we would be grateful if you could kindly point them out, allowing us the opportunity to discuss and address them further.\\n\\nThank you for your time and thoughtful review.\\n\\nBest regards,\\nA-Bench authors\"}", "{\"title\": \"Looking forward to discussion.\", \"comment\": \"Dear Reviewer,\\n\\nThank you for recognizing the value of our paper and for providing valuable and constructive feedback. We have carefully addressed the concerns and incorporated additional details based on your thoughtful suggestions in the questions section. We kindly hope that these revisions may merit your consideration for a raised rating.\\n\\nShould there be any remaining issues or concerns, we would greatly appreciate it if you could kindly point them out, allowing us the opportunity to further discuss and address them.\\n\\nThank you once again for your time and thoughtful review.\\n\\nBest regards,\\nThe A-Bench Authors\"}", "{\"title\": \"Official Response to Reviewer WKMw\", \"comment\": \"First of all, we would like to thank the reviewers for the time and constructive feedback. We will address your concerns point by point.\\n\\n**1. Scientific Foundations for the Semantic Reasoning**\\n\\nWe appreciate your inquiries about the scientific foundations of our selected aspects. We agree that ensuring comprehensive coverage is essential. 
When evaluating the semantic capabilities of LMMs, it is generally recommended to test from simpler to more complex tasks [1]. Thus, our aspect selection **follows a progression from basic to more complex dimensions**. Since our primary focus is on exploring LMM-based AIGI evaluations, we draw on previous work that uses LMMs for evaluation metric design [2, 3, 4], where LMMs are typically employed for image-related question-answer tasks.\\nTo evaluate the alignment between an AIGI and its prompt, we first assess overall subject alignment, which led us to select the first dimension, `Basic Recognition`. We then evaluate the attributes, actions, and interactions of objects in the AIGI, leading to the second dimension, `Bag-of-Words Pitfalls Discrimination`. Finally, given the creative nature of AIGI generation, which often requires external knowledge, we chose the third dimension, `Outside Knowledge Realization`.\\n\\nWe further refined our dimensions by considering specific aspects of AIGI generation that are particularly relevant:\\n\\n1. `Major Object Recognition` and `Minor Object Recognition` focus on identifying generated objects, which is a fundamental capability of AIGIs [5, 6].\\n2. `Attributes Awareness` evaluates the model's sensitivity to object attributes, which is crucial for basic evaluations [1, 7].\\n3. `Nouns as Adjectives Awareness` addresses potential issues where T2I models may misinterpret nouns as adjectives, generating objects instead of intended attributes [8, 9].\\n4. `Composition Identification` pertains to understanding compositional relationships [10, 11].\\n5. `Number of Objects Counting` assesses the model's ability to accurately count objects, which is critical for checking if the AIGI matches numerical specifications in the prompt [12, 13].\\n6. `Specific Terms Recognition` involves identifying domain-specific scenes and objects, such as geography, sports, or food, important for external knowledge [14].\\n7. 
`Contradiction Overcome` tests the model's ability to correctly interpret AIGIs even when their content contradicts established world knowledge [15].\\n\\nThus, our dimension selection follows **a suggested benchmark approach from simple to complex, with sub-dimensions designed to address critical points in AIGI generation and LMM evaluation.** While we cannot guarantee absolute comprehensiveness in the semantic domain, we have aimed to cover the most important aspects for evaluating AIGI capabilities with LMMs. In line with your suggestions, **we have provided corresponding citations and restructured our framework to align with the current evaluation setup.** Thank you for your thoughtful feedback.\\n\\n**2. About the Dataset Size**\\n\\nThank you for your comment. To begin with, we would like to briefly outline the dataset sizes of some of the popular LMM evaluation benchmarks. For example, the **MMBench** benchmark, which is widely recognized for its comprehensive evaluation of semantic understanding abilities, contains **2,948 multiple-choice questions (MCQs)** [1]. In the domain of perceptual quality, the Q-Bench benchmark, which is quite popular, includes around **2,990 MCQs** [16]. Based on these examples, we aimed to keep our dataset size around 3,000 MCQs. Initially, we designed a total of 3,000 MCQs, but during the data cleaning and validation process, we identified and removed some problematic or unsuitable items. As a result, we ended up with 2,864 MCQs, which was not an arbitrary decision. We hope this clarifies our rationale.\\n\\nAdditionally, since the A-Bench dataset is fully manually annotated and requires validation by at least three other humans, **the annotation process is both costly and time-consuming**. As such, it is quite challenging to scale up. We will acknowledge this limitation in our discussion section. 
However, we also plan to update and expand the dataset in future work, and we appreciate your understanding of this matter.\"}", "{\"title\": \"Official Responses to Reviewer 8kKW\", \"comment\": \"**4. Relation between High-level Semantic Question Answering and AIGI**\\n\\nThanks for your question. We would like to emphasize that **our proposed High-level Semantic Question Answering is strongly linked to AIGI evaluation, particularly in the area of alignment evaluation.**\\n\\nWhen evaluating the semantic capabilities of LMMs, it is generally recommended to test from simpler to more complex tasks [6]. Thus, our aspect selection **follows a progression from basic to more complex dimensions**. Since our primary focus is on exploring LMM-based AIGI evaluations, we draw on previous work that uses LMMs for evaluation metric design [1, 2, 3], where LMMs are typically employed for image-related question-answer tasks.\\nTo evaluate the alignment between an AIGI and its prompt, we first assess overall subject alignment, which led us to select the first dimension, `Basic Recognition`. We then evaluate the attributes, actions, and interactions of objects in the AIGI, leading to the second dimension, `Bag-of-Words Pitfalls Discrimination`. Finally, given the creative nature of AIGI generation, which often requires external knowledge, we chose the third dimension, `Outside Knowledge Realization`.\\n\\nWe further refined our dimensions by considering specific aspects of AIGI generation that are particularly relevant:\\n\\n1. `Major Object Recognition` and `Minor Object Recognition` focus on identifying generated objects, which is a fundamental capability of AIGIs [4, 5].\\n2. `Attributes Awareness` evaluates the model's sensitivity to object attributes, which is crucial for basic evaluations [6, 7].\\n3. 
`Nouns as Adjectives Awareness` addresses potential issues where T2I models may misinterpret nouns as adjectives, generating objects instead of intended attributes [8, 9].\\n4. `Composition Identification` pertains to understanding compositional relationships [10, 11].\\n5. `Number of Objects Counting` assesses the model's ability to accurately count objects, which is critical for checking if the AIGI matches numerical specifications in the prompt [12, 13].\\n6. `Specific Terms Recognition` involves identifying domain-specific scenes and objects, such as geography, sports, or food, important for external knowledge [14].\\n7. `Contradiction Overcome` tests the model's ability to correctly interpret AIGIs even when their content contradicts established world knowledge [15].\\n\\nThus, our dimension selection follows **a suggested benchmark approach from simple to complex, with sub-dimensions designed to address critical points in AIGI generation and LMM evaluation.** Therefore, the High-level Semantic Question Answering framework we propose is indeed tailored to support LMM evaluation on AIGIs.\\n\\n**5. Misunderstanding of the Question-Answering Data**\\n\\nThe reviewer might have a misunderstanding here. We would like to clarify that **all the Question-Answering data in A-Bench is entirely original and is created by us; it does not come from any other existing datasets**. Specifically, each question-answer pair in A-Bench is first annotated by a trained annotator and then verified by three additional reviewers before being finalized.\\n\\nThe only data sourced from an external dataset is the A-Bench-P2 images, which are sampled from AIGIQA-20K [16]. 
However, we would like to reiterate that **all the Question-Answering data in A-Bench is collected and created by us.** We hope our clarification helps clear up any misunderstanding.\"}", "{\"title\": \"Response to author comments\", \"comment\": \"I thank the authors for their detailed response.\\nThe variance analysis should be included in the paper. \\nOverall I think this is a good paper and research in this direction is warranted given the exponential progress in generative models and their capabilities. I vote to keep my ratings.\"}", "{\"title\": \"Official Response to Reviewer WKMw\", \"comment\": \"**References**\\n\\n[1] Liu Y, Duan H, Zhang Y, et al. Mmbench: Is your multi-modal model an all-around player?[C]//European Conference on Computer Vision. Springer, Cham, 2025: 216-233.\\n\\n[2] Lin Z, Pathak D, Li B, et al. Evaluating text-to-visual generation with image-to-text generation[C]//European Conference on Computer Vision. Springer, Cham, 2025: 366-384.\\n\\n[3] Cho J, Hu Y, Garg R, et al. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation[J]. arXiv preprint arXiv:2310.18235, 2023.\\n\\n[4] Ku M, Jiang D, Wei C, et al. Viescore: Towards explainable metrics for conditional image synthesis evaluation[J]. arXiv preprint arXiv:2312.14867, 2023.\\n\\n[5] Nichol A, Dhariwal P, Ramesh A, et al. Glide: Towards photorealistic image generation and editing with text-guided diffusion models[J]. arXiv preprint arXiv:2112.10741, 2021.\\n\\n[6] Saharia C, Chan W, Saxena S, et al. Photorealistic text-to-image diffusion models with deep language understanding[J]. Advances in neural information processing systems, 2022, 35: 36479-36494.\\n\\n[7] Xu P, Shao W, Zhang K, et al. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models[J]. arXiv preprint arXiv:2306.09265, 2023.\\n\\n[8] Chatterjee A, Stan G B M, Aflalo E, et al. 
Getting it right: Improving spatial consistency in text-to-image models[C]//European Conference on Computer Vision. Springer, Cham, 2025: 204-222.\\n\\n[9] Motamed S, Paudel D P, Van Gool L. Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models[J].\\n\\n[10] Wang, Y., Zhang, L., Chen, T., & et al.. (2024). Scene graph disentanglement and composition for generalizable complex image generation. arXiv preprint arXiv:2410.00447.\\n\\n[11] Huang, L., Zhang, Y., Yang, W., & et al.. (2024). IterComp: Iterative composition-aware feedback learning from model gallery for text-to-image generation. arXiv preprint arXiv:2410.07171.\\n\\n[12] Litalby, I., Boulanger, S., & others. (2024). Make It Count: Text-to-Image Generation with an Accurate Number of Objects. arXiv Preprint, 2406.03070.\\n\\n[13] Zhou, Y., Xu, W., & Li, X. (2023). Object Count Generation in Diffusion Models. IEEE Transactions on Neural Networks and Learning Systems, 34(11), 2463-2475.\\n\\n[14] Schwenk D, Khandelwal A, Clark C, et al. A-okvqa: A benchmark for visual question answering using world knowledge[C]//European conference on computer vision. Cham: Springer Nature Switzerland, 2022: 146-162.\\n\\n[15] Vu, H., et al. (2023). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. arXiv preprint arXiv:2406.13805.\\n\\n[16] Wu H, Zhang Z, Zhang E, et al. Q-bench: A benchmark for general-purpose foundation models on low-level vision[J]. arXiv preprint arXiv:2309.14181, 2023.\\n\\n[17] Thomas M, Joy A T. Elements of information theory[M]. Wiley-Interscience, 2006.\\n\\n[18] Li C, Kou T, Gao Y, et al. Aigiqa-20k: A large database for ai-generated image quality assessment[J]. arXiv preprint arXiv:2404.03407, 2024, 2(3): 5.\\n\\n[19] Wu H, Zhang Z, Zhang W, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels[J]. arXiv preprint arXiv:2312.17090, 2023.\"}" ] }
4mqt6QxSUO
A Unified Riemannian-Geometric Framework for SARS-CoV-2 Detection from CT Scans
[ "Yiyang Niu", "Shun Liu", "Ryan Han", "Hao Xie", "Hanyi Yu", "Chen Heng", "Zhenghan chen" ]
We present a novel, theoretically grounded framework for automated SARS-CoV-2 detection from pulmonary Computed Tomography (CT) scans, integrating cutting-edge concepts from statistical learning theory, optimal transport, and information geometry. Our approach begins with a submodular optimization-based image selection protocol, utilizing a continuous greedy algorithm. The feature extraction process employs a Riemannian geometry-inspired attention mechanism, where feature integration is formulated as geodesic interpolation on a manifold induced by the Fisher Information Metric. We introduce a unified decision-making framework based on proper scoring rules and Bregman divergences, encompassing multiple voting schemes with proven consistency and asymptotic normality properties. To address domain shift, we develop an adversarial domain adaptation technique using the Wasserstein-Fisher-Rao distance, complemented by a graph-based regularization term derived from Gromov-Wasserstein theory. Theoretical analysis provides convergence guarantees for the adversarial training process and establishes generalization bounds in terms of optimal transport distances. Empirical evaluation demonstrates the superiority of our approach over existing methods, achieving state-of-the-art performance on benchmark datasets. This work not only advances the field of automated medical image analysis but also contributes fundamental theoretical insights to the broader domains of machine learning and optimal transport theory.
[ "SARS-CoV-2", "Transfer learning", "Medical image identification" ]
Reject
https://openreview.net/pdf?id=4mqt6QxSUO
https://openreview.net/forum?id=4mqt6QxSUO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yuhOVUB33u", "tYLUDTfCHo", "p9ROF8yvM9", "Ww0TGtXyoQ", "WtuNHtPoJS", "Lun8GxZVKB", "AHZ55BAEYp" ], "note_type": [ "official_review", "official_review", "official_comment", "decision", "official_review", "meta_review", "official_review" ], "note_created": [ 1730883656851, 1730294313744, 1731609791616, 1737524226622, 1730652324841, 1734855145813, 1730733223075 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12958/Reviewer_jYQE" ], [ "ICLR.cc/2025/Conference/Submission12958/Reviewer_DzPu" ], [ "ICLR.cc/2025/Conference/Submission12958/Reviewer_jmbF" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12958/Reviewer_jmbF" ], [ "ICLR.cc/2025/Conference/Submission12958/Area_Chair_6HFu" ], [ "ICLR.cc/2025/Conference/Submission12958/Reviewer_uinq" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a novel framework for automated SARS-CoV-2 detection from pulmonary CT scans, combining advanced statistical learning theory, optimal transport, and information geometry. Key components include a submodular optimization-based image selection protocol, Riemannian geometry-inspired feature extraction via geodesic interpolation on a Fisher Information Metric-induced manifold, and a unified decision-making model with Bregman divergences. Additionally, the authors propose an adversarial domain adaptation mechanism using the Wasserstein-Fisher-Rao distance with graph-based regularization to handle domain shifts. The framework achieves state-of-the-art performance on benchmark datasets, suggesting significant contributions to both medical image analysis and theoretical machine learning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The framework creatively applies Riemannian geometry, particularly through a novel attention mechanism based on geodesic interpolation. 
This approach is not commonly explored in medical imaging, setting the work apart.\", \"The proposed methods are theoretically grounded, with rigorous proofs for convergence and generalization bounds. This attention to theory enhances the credibility and robustness of the approach.\", \"By addressing the need for reliable SARS-CoV-2 detection and domain adaptation in CT imaging, the paper is highly relevant to ongoing medical challenges. The framework\\u2019s potential applications beyond SARS-CoV-2 could drive further research in medical diagnostics and transfer learning.\", \"Benchmark results indicate superior performance, especially in domain-shift scenarios, which highlights the model's practical effectiveness.\"], \"weaknesses\": [\"The reliance on advanced mathematical frameworks like Riemannian geometry and optimal transport may limit the accessibility and reproducibility of the work, as these methods require specialized knowledge.\", \"While the framework shows strong theoretical grounding, additional experiments contrasting the proposed Riemannian-geometric feature extraction with simpler alternatives would clarify the practical benefits of the added complexity.\", \"The paper could better address real-world deployment considerations, such as computational efficiency and robustness in clinical environments.\"], \"questions\": [\"Could the authors provide more empirical results comparing the proposed feature extraction with traditional methods to highlight the effectiveness of the Riemannian-geometric approach?\", \"How does the computational complexity of the adversarial domain adaptation impact the framework's scalability for large datasets or real-time applications?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper attempts to integrate advanced mathematical concepts, such as Riemannian geometry, submodular optimization, and optimal 
transport theory, into the field of medical image analysis.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"From my point of view, this paper overclaims the contribution. This paper attempts to integrate advanced mathematical concepts, such as Riemannian geometry, submodular optimization, and optimal transport theory, into the field of medical image analysis. However, the experiments cannot demonstrate its contribution. The paper also introduces an adversarial domain adaptation technique, but no ablation study has proven its efficiency.\", \"weaknesses\": \"1. There exist so many ''Meaningless Equations''. Several equations (such as Equation 4 involving the Fisher Information Metric and Equation 6 on geodesic interpolation) are overly complex and seem disconnected from the practical task of SARS-CoV-2 detection. Using these equations does not provide any clear advantage or insight into improving the detection process. Can the authors provide a figure that shows the connections among those equations and modules? Also, more experiments should be added to this paper to show how they improve the performance of SARS-CoV-2 detection.\\n\\n2. Beyond Weakness 1, this paper also suffers from overcomplication. The decision-making framework based on Bregman divergences and multiple voting schemes (Equations 10\\u201316) adds unnecessary layers of complexity. These methods do not appear to address the practical challenges in SARS-CoV-2 detection, and their benefits are not empirically validated. Furthermore, I consider that this framework should not serve only one task; it should also work for other tasks. The experimental results are only presented on SARS-CoV-2 detection, for which other methods have already achieved high accuracy, which weakens this paper. \\n\\n3. What's the motivation? The paper fails to adequately explain why the complex mathematical tools used are necessary for solving the specific problem of SARS-CoV-2 detection. 
The connection between the mathematical framework and the medical imaging task is tenuous at best. I am really confused about the paper's objectives. There is no figure or description that builds a strong connection between the proposed framework and SARS-CoV-2 detection. \\n\\n4. While the paper is mathematically dense, it lacks solid empirical results that justify the introduction of complex theoretical models. There is no clear demonstration that the advanced mathematical constructs (such as geodesic-based feature integration) outperform simpler approaches commonly used in medical image classification. More experimental results related to other datasets/tasks should be added and discussed.\\n\\n5. Poor experiments. The presented experimental results do not convincingly demonstrate that the proposed methods significantly outperform existing techniques. The improvements shown are marginal and do not seem to justify the additional mathematical complexity introduced by the paper.\", \"questions\": \"1. Why is Riemannian geometry necessary for this task, and how does it concretely improve SARS-CoV-2 detection from CT scans? Could the authors clarify how these equations impact practical performance?\\n\\n2. Can the authors provide more details on how their theoretical advancements (e.g., geodesic interpolation, adversarial domain adaptation) translate to real-world medical diagnostic improvements? Are there simpler models that achieve similar or better results?\\n\\n3. The decision-making framework seems overly complex. How does the Bregman divergence-based approach perform in comparison to standard voting or confidence aggregation methods commonly used in medical image classification?\\n\\n4. 
How robust are the theoretical guarantees (e.g., Theorem 3.2, Theorem 3.5) in real-world applications, and what are the specific conditions under which these guarantees hold for the dataset and task described?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Review Feedback from Associate Program Chairs\", \"comment\": \"It seems I'm unable to reply directly to the \\\"Review Feedback from Associate Program Chairs\\\" message, so I am replying here.\\n\\nI agree with the way the associate program chairs rephrased my comments.\\n\\nChatGPT gives similar rephrasing.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes to integrate cutting-edge concepts from statistical learning theory, optimal transport, and information geometry in order to detect SARS-CoV-2 from pulmonary Computed Tomography (CT) scans.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Theoretical analysis provides convergence guarantees and generalization bounds. Riemannian geometry-inspired attention mechanism; feature integration is formulated as geodesic interpolation. The Fisher Information Metric, a Riemannian manifold on feature space F, Bregman divergence, feature attention, and decision-making methods (average balloting, hierarchical balloting).\\n\\nMathematical statements appear valid; however, the overall methodology appears questionable. Results are presented on a very specific data context where accuracy is already 97% using simpler x-ray imaging.\", \"weaknesses\": \"The paper's methodology seems questionable. Why begin with a focus on an \\\"optimal image selection protocol\\\", which selects optimal 2D slices of a 3D volume? Why not just use the entire volume? Presumably SARS-CoV-2 affects the entire volume.\\n\\nThe experimental motivation is hard to understand. 
As stated, basic CNNs (Xception) already apparently achieve 97.97% classification accuracy of the condition from chest X-ray imaging. 2D X-ray imaging is a much cheaper and more widely used modality than 3D CT imaging.\", \"questions\": \"Are there any other more common experimental contexts where this method might be applicable?\\n\\nPlease address the practical utility of the chosen methodology, CT slice selection, when x-rays already achieve 97% accuracy.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work presents a theoretically grounded framework for automated SARS-CoV-2 detection from pulmonary Computed Tomography (CT) scans by integrating cutting-edge concepts from statistical learning theory, optimal transport, and information geometry.\", \"additional_comments_on_reviewer_discussion\": \"This work has four reviewers. Three reviewers agree to reject this work, while the other reviewer agrees to accept this work. Hence, this work cannot be accepted in ICLR 2025.\"}", "{\"summary\": \"The paper presents a framework for SARS-CoV-2 detection from CT scans, integrating advanced concepts from statistical learning theory, optimal transport, and information geometry.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The method is illustrated in detail.\", \"weaknesses\": \"1. Lack of clear motivation. SARS-CoV-2 detection from CT scans has been widely explored in the past few years. What is the innovation of such a design? The authors should state and summarize existing methods. What are the limitations of existing methods? What are the differences between the proposed method and existing detection methods?\\n2. Lack of quantitative comparison experiments. Does the proposed method perform better than existing methods? The paper does not adequately explain how the theoretical framework connects to experiments or analysis. 
\\n3. The writing lacks a cohesive structure that would typically guide readers from the theoretical underpinnings to their practical application in experiments, which makes it challenging to grasp the significance of the theoretical contributions in the context of the experiments conducted.\", \"questions\": \"Please refer to Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4mni4W1ZXy
Regularity explains emergence
[ "Yi Wang", "Zhiren Wang" ]
We investigate the mechanisms behind emergence in large language models from the viewpoint of the regularity of the optimal response function $f^*$ on the space of prompt tokens. Based on theoretical justification, we provide an interpretation that the derivatives of $f^*$ are in general unbounded and the model gives up reasoning in regions where the derivatives are large. In such regions, instead of predicting $f^*$, the model predicts a smoothified version obtained via an averaging operator. The threshold on the norm of derivatives for regions that are given up increases together with the number of parameters $N$, causing emergence. The relation between regularity and emergence is supported by experiments on arithmetic tasks such as multiplication and summation, as well as other tasks. Our interpretation also sheds light on why fine-tuning and Chain-of-Thought can significantly improve LLM performance.
[ "large language model", "emergence ability", "approximation", "scaling law", "regularity" ]
https://openreview.net/pdf?id=4mni4W1ZXy
https://openreview.net/forum?id=4mni4W1ZXy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oTFQyX3baK", "mwlEnXlGVH", "lldARp6M38", "liECfZZOmv", "hAOro3lPP8", "RU4RK38HBs", "LfRswTo7hJ", "J9sCpNqtmc", "6hV9LV8V8h", "5hTGyAfbbs", "3kz14lmvcP" ], "note_type": [ "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732772438947, 1732773610300, 1733955036575, 1730653530320, 1732774071625, 1733192295459, 1730482068487, 1732776382601, 1731378426860, 1732776203621, 1730565799466 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Reviewer_wYtf" ], [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Reviewer_4byG" ], [ "ICLR.cc/2025/Conference/Submission8136/Reviewer_hiKf" ], [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Reviewer_tDVV" ], [ "ICLR.cc/2025/Conference/Submission8136/Authors" ], [ "ICLR.cc/2025/Conference/Submission8136/Reviewer_4byG" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewers for their careful reading and valuable opinions. We accept the main criticism that more experiments beyond arithmetic tasks are needed to support general causality, and explain some obstructions to that goal in our responses below. We will spend more time on designing experiments to support our interpretative frame work and write a new version, and are not uploading a revision for now. We post our responses below and will be happy to receive more advice from the referees in the next few days.\"}", "{\"comment\": \"Thank you very much for the insightful comments.\\n\\n-Response to \\\".. 
highlight exactly what constitutes the list of ``quantitative/concrete\" predictions proposed by the theory.\"\\n\\nThe threshold of emergence is imposed where the slope of the accuracy curve changes. There are significant slope changes in the plot. In a future version, we will add a description to address this point.\\n\\n-Response to \\".. The toy model experiments using ResNet don\\u2019t closely match the large language models (LLMs) setup. \\\" \\n\\nWe are optimistic that our interpretation is universal but open to other possibilities, and we accept the fact that ResNet is very different from LLMs. We would prefer to think this is a limitation of our experiments rather than of the theory, but would love to learn why a priori LLMs and ResNet would require intrinsically different mechanisms for emergence. \\n\\n-Response to \\"Choosing tasks more closely aligned with the theory would make the paper\\u2019s ideas clearer and more applicable.\\"\\n\\nWe chose arithmetic tasks because it is mathematically possible to compute the derivative of the target function for them. For a real-world question, for instance, \\n\\\"What is Shakespeare's most famous masterpiece?\\\" it would be very difficult to analyze or even estimate the size of the derivative of the target function. With tokenization and embedding, the function may be distorted in a very complicated way, and the solution's implicit relationship to the variation of the input is hard to measure. We would appreciate insights on this obstruction to the implementation of more profound experiments.\\n\\n-Response to \\"Can the theory predict specific model sizes or conditions where emergent behavior happens?\\"\\n\\nThis is a very relevant question and we appreciate it. We believe this question is closely related to the next question you ask, as well as the relation between $N$ and $\\\\epsilon(N)$ that reviewer 4byG asks about. 
It is theoretically possible to make this estimate by going through the proof of Theorem 2.5, and the main ingredient of the resulting estimate will be the tail shape of the distribution of derivatives at all input-output pairs in the training dataset of the LLM. The fact that this distribution is not publicly available, and expensive to compute even if the training data is fully accessible, makes it hard to make a precise estimate. \\n\\n-Response to \\"Could you design an experiment with an autoregressive transformer model that would produce results more relevant to the theory?\\"\\n\\nAs we mentioned above, the main difficulty lies in determining the derivative norms of the ground truth target function, which made us stick to ResNet and arithmetic tasks in the current experiments. We would appreciate any advice on the design of that part of the experiment. \\n\\n-Response to \\"Can the theory\\u2019s error bounds predict error rates for specific tasks at different model sizes?\\"\\n\\nIn a perfect world where the tail distribution of ground truth derivatives is known, the proof of the main theorem would allow us to predict the relation between $N$ and $\\\\epsilon(N)$, namely at what difficulty level the model starts to give up tasks, which would cover part of the error rate. The remaining error rate, i.e. the chance that the model does make an effort but fails, is separate and can be estimated by the Siegel-Xu bound more directly.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank all reviewers for their careful reading and valuable opinions. We will continue the next stage of our research with these suggestions in mind.\"}", "{\"summary\": \"Paper introduces the idea that LLMs and other machine learning models display a smoothing behavior in regions of input space where the derivative of the model output with respect to the input is large. 
The behavior is said to emerge in specific regions of parameter space where training data has a \\u201clarge derivative\\u201d; in such regions of input space, the result is that the network learns a \\u201csmoothed version\\u201d of the input-output map rather than the map itself. The\\nclaim is that the averaging behavior scales with parameter number and can lead to \\u201cemergence\\u201d -- where the performance of the model jumps on specific tasks as a function of parameter number. The authors introduce and prove a theorem which states that when a model (a neural network map) cannot meet a performance standard within epsilon, then the model will learn an averaged version of the training data. The paper then provides numerical experiments with ResNet for fitting a trigonometric function and then uses the Qwen chat model for some analysis of algebraic operations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I found the central claim interesting but preliminary for several reasons. Theoretical insights into how computations in language models can achieve zero-shot task behavioral changes \\u2013 for example, sorting a list in ascending vs descending order based on small changes in the prompt \\u2013 are interesting. The idea that behavior on such tasks is influenced by the magnitude of the local derivative of the output on training data, leading to the learning of an averaged function, is interesting, although it isn't clear how the smoothed function can perform computations.\", \"weaknesses\": \"Technically, I find the notion of derivative in token space to be problematic. 
I have worked on similar problems in the case of CNNs where the notion of the derivative is well defined because inputs can be taken to be over the real numbers.\\n\\nThe problem with prompts is that tokenization causes the input domain for networks to be discrete valued (say integer valued), and the nature of the derivative on such input spaces is more subtle. How is the derivative to be defined on such spaces? The problem is that the local behavior of a derivative taken on Z embedded into R is not representative of the notion that the authors seek\\u2013 which is a function that measures changes on input instances. \\n\\nTherefore, I would like to see a much more rigorous development of the main theorem with a specific definition and analysis of the derivative for token-valued functions, which are the main object of study for LLMs. \\n\\n\\nSecond, the numerical experiments in the paper are very limited\\u2013 the title of the paper is about language models, but the first experiment is on ResNet. \\n\\nThe language model experiment is limited and I do not see a global investigation of this notion of the network derivative in different regions of parameter space and the input-output function f or the \\u201csmoothed version\\u201d S*f. \\n\\nCan the authors systematically evaluate the derivative and infer the smoothed input-output function on a more general class of language models? \\n\\nTo solidify their central claim, can the authors analyze models of increasing size showing convergence to their central claim with model size?\", \"questions\": \"How do the authors define the derivative over token-valued neural networks?\\n\\nCan the authors systematically evaluate the derivative and infer the smoothed input-output function on a more general class of language models? 
\\n\\nTo solidify their central claim, can the authors analyze models of increasing size showing convergence to their central claim with model size?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for the insightful comments.\\n\\n-Response to \\"I would like to see a much more rigorous development of the main theorem with specific definition and analysis of the derivative for token valued functions which are the main object of study for LLMs.\\"\\n\\nWe appreciate this question and wish to hear more of your insight. Our assumption is that an appropriate tokenization is an optimally smoothified embedding of a discrete natural dictionary, in the sense that words with similar meaning are closer to each other. And there are many dimensions to represent similarity in different aspects. In fact, the subsequent layers of an LLM are smooth neural networks that treat the tokenized inputs continuously. The success of these models itself implies the smoothness of the tokenization. Please share your opinion on this view.\\n\\n-Response to \\"the numerical experiments in the paper are very limited\\"\\n\\nThe phenomenon of the numerical experiments in the ResNet sheds light on the fundamental philosophy of the issue that we are discussing. While we agree it is a toy experiment, we would like to include it to provide first evidence. We plan to design more experiments in an LLM setting.\\n\\n-Response to \\"The language model experiment is limited\\"\\n\\nWe agree that the experiments only cover a very special type of task and fall short of showing the full landscape of the function $f$. 
The main obstruction is, indeed as you ask in the next question, the difficulty in systematically evaluate the derivative of $f$.\\n\\n-Response to \\\"How do the authors define the derivative over token valued neural networks?\\\"\\nAs mentioned above, we think token valued inputs should be viewed as continuously valued once they are embedded as vectors in the sense that nearby vectors have similar meanings. But we are open to hear different opinions on this issue.\\n\\n-Response to \\\"Can the authors systematically evaluate the derivative and inferred the smoothed input-output function on a more general class of language models?\\\"\\n\\nWe acknowledge that this is hard and in fact it is what limited us to arithmetic tasks whose derivatives are more explicit. We can think of evaluation approaches on some other task classes but they are still partial and not systematic enough. Any advice or references will be appreciated.\\n\\n-Response to \\\"can the authors analyze models of increasing size showing convergence to their central claim with model size?\\\"\\n\\nWe are not sure if we understand this question but assume that you are asking about making the convergence more effective, or the relations between $\\\\epsilon(N)$ and $N$. The short answer is yes on the theoretical level. But the longer answer is that the quantitative estimates requires knowledge about the tail distribution of derivative norms of training data used by the LLM, which is not accessible.\"}", "{\"comment\": \"I thank the authors for their response. 
I have read the other reviews and responses and I will keep my score; as the authors have mentioned, the causal link between their proposed framework and experiments needs to be strengthened if they want to support a more general claim\u2014otherwise, a reframing of the scope of the paper is needed.\\n\\nI appreciate the authors\u2019 response to my questions and understand that it would be difficult to derive precisely, but an estimate that can predict the model scale at which a breakthrough occurs, on any task, would be compelling. Right now, the theory gives some implications for some looser trends that should be observed with the relevant quantities (e.g., k, d). However, I do think this is an interesting perspective where things like fine-tuning, chain of thought, and the difficulty of tasks exhibiting \u2018linearity\u2019 can have an intuitive explanation through looking at regularity. It might be interesting to include experiments even in simpler settings which do exhibit the contrapositive, e.g., the authors claim that tasks which do not exhibit emergence must be highly regular.\"}
I had to apply a lot of guesswork and goodwill to try to understand what the main claims are. To illustrate with one example: The statement of Theorem 2.5 says \\\"Under assumptions 2.4\\\" (this is an assumption on the boundedness of the difference between loss of the minimizer and loss of the parametric minimizer with N parameters, called (5) in the paper) \\\"... Instead of the upper bound (5), which yields an infinite value...\\\". How can you assume a finite bound and then say it's infinite?\\nSadly, the paper is so full of defective English that even with the best of interpretations it is not possible to follow beyond the vague main ideas.\", \"it_is_not_really_clear_what_the_contributions_are_on_top_of_the_work_cited\": \"Siegel and Xu, 2020; E et al., 2022; Wu, 2023. Beyond a combination of results, what is new?\\nSurely, it is known that large variability of the ground truth around a point gives more trouble to a model, and larger models interpolate better. \\n\\nAnother weakness pertains to the experiments. It is difficult to see how they illustrate the theoretical claim. First, a lot of assumptions are being made on the derivative, which is the key object of study. Like line 273: \\\"we will assume that [the embedding] keeps the metric space structure of the set {0, 1, \\u00b7 \\u00b7 \\u00b7 , 9}\\\" - without any justification I don't see why this is true. It is not clear to me how Figs. 1 and 2 demonstrate *any* emergence (and error bars are completely missing everywhere). \\nFor the Qwen based LLM experiments, I am surprised how small the dataset is (128?). 
\\nThere might be a potentially interesting observation in Lemma 3.2 saying that derivatives of middle digits are larger and thus harder to learn for small models, but the way this is written it is unclear whether this is true, and there might be confounding issues here (for instance, it's easy to guess whether the last digit is even or odd, given the two numbers to multiply; it could be that allowing 0 for the first digit increases the probability that guessing 0 there is correct, etc etc).\\nFigs. 3 and 4 are not very conclusive without error bars, especially for such small training sets.\\n\\nThis paper needs to be carefully rewritten (and it wouldn't hurt to use a language model for grammar control). Apart from grammatical errors, there is general sloppiness (for instance, line 424 goes from \\\"Grace\\\" to \\\"Tina\\\", to name just one of many examples. Or the missing definition of \\\"i\\\" in the sum in line 219. What are d and k in line 219... Etc. )\\nThe color scheme on all figures should be unified to go from lighter to darker (or something like that) for larger models - it doesn't help to have a color mix.\", \"questions\": \"1) I am unclear on what exactly your contribution is and what was already implicit in prior work - can you make that precise?\\n2) Why is the assumption that embeddings of digits preserve the metric space structure true?\\n3) Why are your datasets so small? What are the error bars? What am I supposed to see in the Figures ?\\n4) What are the bars above the variables starting line 282?\\n5) Sec 4.2. CoT: can you say anything more specific beyond speculation? 
Why are the derivatives multiplying and reducing?\\n6) Why do you need a 2-component (2dim) function in your first example line 219 (why is one component not enough here?)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Response to Q1 : We prove that the optimal policy for the model is to give up difficult tasks. So it is \\\"Prediction vs No prediction\\\" instead of \\\"Good prediction vs Bad prediction\\\". Another way to put it is that the universal approximation works cited assume bounded Sobolev norms; the proposal of this paper is that the model cuts off unbounded Sobolev norms at a suitable threshold depending on model capacity to become bounded.\", \"Response to Q2: We believe for optimization of mathematical tasks, it should be true for well-trained embeddings in a quasi-isometric sense that $1$ is closer to $2$ than to $9$. We will verify it in future versions by inspecting the embedding.\", \"Response to Q3: We will pay attention to more detailed experiment results in future versions.\", \"Response to Q4: The overline stands for decimal concatenation. By $\\\\overline{456}$ we mean the decimal integer 456.\", \"Response to Q5: Chain Rule\", \"Response to Q6: 1D is enough. We used 2D to better demonstrate the phenomenon that the prediction is at the origin when the model doesn't have enough capacity to predict.\"]}", "{\"summary\": \"This paper investigates the concept of \\\"emergent abilities\\\" in large language models (LLMs) by developing a theoretical framework based on the regularity (or smoothness) of the optimal response function. The authors suggest that LLMs approximate this response function by smoothing out regions with high derivative values, leading to approximation errors that gradually decrease as the model size, N, grows. 
The theory proposes that as N increases, the model can capture more complex aspects of the response function without the need for smoothing, which results in sudden improvements or \\\"emergence\\\" of new abilities. The authors present a key theorem that quantifies the relationship between model size and approximation quality. They also provide experimental evidence to support the theory, including function approximation with ResNets and arithmetic tasks to demonstrate the model\\u2019s behavior in regions with high derivatives.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"I appreciated the theoretical framework based on Siegel & Xu (2020), which links the regularity of optimal response functions with the concept of emergence. This framework offers a fresh perspective on the phenomenon of \\\"emergent abilities\\\" in large language models (LLMs).\", \"The main theorem effectively illustrates how model size relates to approximation quality, especially in regions where the optimal response function shows complex behavior. Although primarily qualitative, this theoretical foundation provides valuable insights into why larger models may perform better with irregular functions.\", \"I found some of the empirical results intriguing, particularly the scaling experiments with Qwen models that revealed various trends in arithmetic calculation outcomes.\"], \"weaknesses\": [\"Although the paper provides a unique and intuitive perspective on the mechanisms underlying emergence, it doesn\\u2019t specify a precise threshold or clear scaling rule to predict when this emergence occurs. I would appreciate it if the authors could better highlight exactly what constitutes the list of \\\"quantitative/concrete\\\" predictions proposed by the theory.\", \"The toy model experiments using ResNet don\\u2019t closely match the large language models (LLMs) setup. 
This setup is qualitatively different from the autoregressive transformers typically used in LLMs. While the authors argue that the theory applies to any model type, this actually highlights a limitation of the theory rather than supporting the use of ResNets to examine phenomena observed in LLMs.\", \"The choice of arithmetic tasks doesn\\u2019t clearly connect to the theory\\u2019s focus on changes in derivatives, as the observed U-shaped trend can be explained solely by the task structure. Choosing tasks more closely aligned with the theory would make the paper\\u2019s ideas clearer and more applicable.\", \"Overall, while I find the results in some parts of the paper interesting, they often appear disconnected, lacking a clear and logical progression.\", \"Presentation should be improved. In particular, it would greatly help if captions contained the necessary information to understand the content beyond what is provided in the existing title headers.\"], \"questions\": [\"Can the theory predict specific model sizes or conditions where emergent behavior happens? Is there a certain size, N, where the model shifts from smoothing f\\u2217 to accurately capturing it in areas with sharp changes?\", \"Could you design an experiment with an autoregressive transformer model that would produce results more relevant to the theory?\", \"Can the theory\\u2019s error bounds predict error rates for specific tasks at different model sizes?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Many thanks for your careful reading and detailed comments. Your opinions are greatly helpful for us to better approach the problem. Thank you for appreciating the point of using a ResNet setup to demonstrate the phenomenon before generalizing to the setting of an LLM. 
We agree with you that a general justification of the theory would be a high burden, and will work on it, especially in the natural language domain.\\n\\nWe also thank you for finding typos and grammatical errors. We will correct them in future versions.\", \"response_to_q1\": \"Thank you for this question, which gets right to the point. It is theoretically possible to make this estimate by going through the proof of Theorem 2.5; the main ingredient of the resulting estimate will be the tail shape of the distribution of derivatives at all input-output pairs in the training dataset of the LLM. The fact that this distribution is not publicly available and is expensive to compute, even if the training data is fully accessible, makes it hard to give a precise estimate. We believe that the saturation of accuracy is explained by the fact that when the difficulty of a task increases, that is, its regularity decreases, (1) the amount of available training data at that level of difficulty dramatically decreases, and (2) the scaling law requires N to scale polynomially to adapt. Combining the two means that a moderate drop in regularity will require a significant upscaling of N to compensate, i.e. the scaling law ends and one needs other strategies, such as CoT, as alternatives. We will take time to work on a new version and are not uploading one at this time, but we can confirm that a plot such as the one you suggested does show such saturation.\", \"response_to_q2\": \"We hypothesize that beyond a regularity threshold, all models give up and provide random answers, resulting in similar but noisy error levels, which would be consistent with our interpretation.\\nThe standard error of the error is roughly proportional to the mean error and displays the same saturation pattern. 
For example, in the experiment from Section 3.4, for the digit k=3, the standard error is almost constant, very close to 2 across all models and all numbers of steps >= 3, implying that the models are returning randomly distributed responses beyond that regularity level.\", \"response_to_q3\": \"The reason for using the summation of single digits is that regular digit addition is easier from our regularity viewpoint, in the sense that a digit in the answer likely only depends on adjacent digits from the inputs, while the summation of multiple single digit numbers has multiple dependencies. For the two examples you mentioned, the first one is a very restrictive subset of the summation of multiple single digit numbers and falls below the bar for seeing interesting emergence. For the second one, the carry matters only slightly, regularity still grows fast with the number of digits, and we still see the U-shape.\", \"response_to_q4\": \"Thank you for suggesting the contrast: the \\\"linear trend\\\" type represents insufficiency of training data, and the \\\"breakthrough\\\" type represents high regularity of the optimal response and is closer to our framework. This aligns with our hypothesis in the following way: Problems that need logical reasoning/sequential steps can be viewed as a composition of problems. The optimal function is thus a composition of a sequence of optimal functions. By the chain rule, the derivative of a composition of functions is equal to the product of the individual functions' derivatives. Thus, the magnitude of its derivative is much bigger than that of a single function. Knowledge-based questions are either an individual function or a sum of these functions. Therefore, their derivatives are of the same magnitude as that of an individual function.\"}
The authors claim that models do not model the optimal response in regions where derivatives are large, instead opting to predict a smoother function obtained through averaging values. They justify this theoretically and have accompanying experimental results on a synthetic function and certain arithmetic tasks (multiplication, sequence of single-digit addition, and addition word problems), where some intuitions from their theory are reflected in the accuracy trends of Qwen models as the number of parameters scale.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The theory is presented clearly, and the perspective of parameter size controlling the threshold on the extent to which the model predicts an irregular optimal response function is an interesting idea. The experimental setups are clear, and the synthetic setup is particularly compelling.\", \"weaknesses\": [\"While the theory seems sound and the synthetic experiment is compelling, I still reserve some skepticism for the connection to the LLM experiments on arithmetic tasks. Particularly I believe the title of this paper \\u201cRegularity Explains Emergence\\u201d is very strong and has a high burden of proof especially given the numerous natural language tasks where emergence has been observed [1] and the extensive discussions around the empirical validity of emergence in the existing literature (eg. [2]).\", \"To expand on this point, I currently can\\u2019t disentangle whether the theory provided by the authors truly gives an explanation for emergent capabilities in LLMs as they claim, or it provides one instance where emergence can occur and one can frame a theoretical narrative around. 
For the arithmetic tasks, while I can see that there can be conclusions drawn from the approximations of the gradient vector that are reflected in the accuracy trends across model scales and quantities like digit position and number of summands, I\\u2019m not convinced this is a result that we wouldn\\u2019t already expect intuitively and is necessarily explained from the theoretical results. The causal connection is not strong, likely due to the limitations of the theory and how it cannot explain more nuanced trends in eg. model scale (please see Questions below for expansion on this point).\", \"In conclusion, I believe that the authors need to be more clear about the scope of their theory and the tasks considered in this work, or provide stronger connections between the observed emergence and the regularity of the optimal function. Are there examples of natural language tasks where the theory may predict a regular optimal response function and we do see linear improvements in the task across scale?\", \"As a minor comment, there are areas in the paper where the writing has some typos and grammatical errors; I\\u2019ve listed several below but I\\u2019d like to ask the authors to go over their exposition and address some of the writing.\"], \"line_19\": \"improves -> improve\", \"line_44\": \"\\\\citet instead of \\\\citep\", \"line_47\": \"task -> tasks\", \"line_53\": \"(Theorem 2.5 -> (Theorem 2.5)\", \"line_56\": \"avilable -> available\", \"line_58\": \"LLM model -> LLMs\", \"line_61\": \"method -> methods\", \"line_282\": \"and R-value function -> an R-value function\", \"line_391\": \"despite of -> despite\", \"line_482_483\": \"\\u201cOn the other hand\\u2026\\u201d sentence needs rephrasing\\n\\n[1] Srivastava, Aarohi, et al. \\\"Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.\\\" arXiv preprint arXiv:2206.04615 (2022).\\n[2] Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. 
\\\"Are emergent abilities of large language models a mirage?.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"questions\": \"1. Do you have any insights about how the relation between the number of parameters N and the optimal \\\\epsilon(N) is reflected in the accuracy plots for the arithmetic tasks? For instance, it seems that the threshold \\u2018saturates\\u2019, in the sense that for 32B-110B the accuracy is similar even for the digits where accuracy is not at 100%. As a visualization, could you show what the accuracy looks like for a fixed digit position and x-axis being model scale (from Figures 3-4)?\\n2. You present average error results in Appendix C for the arithmetic tasks, and while general trends are the same as accuracy, it seems much noisier and the trend is not as consistent across model scale (eg. similar error between models with a difference of 2 orders of magnitude). Do you have any explanations for this, and could you also report the standard error across examples for the average error results?\\n3. What was the reasoning behind choosing summation of single digit integers as opposed to performing regular addition on d digits, analogous to the multiplication setting? How would the results change for potentially \\u2018harder\\u2019 or \\u2018simpler\\u2019 subsets of examples on these arithmetic tasks (for example, addition where there\\u2019s no carry for the first digit, or multiplication where there\\u2019s no carry across the digits?)\\n4. From the Big-Bench paper it was shown that the tasks exhibiting the most \\u2018linear\\u2019 trend in performance were perhaps more knowledge-based or required easier text manipulations, and the tasks with more \\u2018breakthrough\\u2019 performance trends had logical reasoning/sequential steps. How would this relate under your framework? 
I\\u2019m not sure these differences in the tasks are necessarily reflected in the regularity of the optimal function.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4mFEb3JvMc
A case for data valuation transparency via DValCards
[ "Keziah Naggita", "Julienne LaChance" ]
Following the rise in popularity of data-centric machine learning (ML), various data valuation methods have been proposed to quantify the contribution of each datapoint to desired ML model performance metrics (e.g., accuracy). Beyond the technical applications of data valuation methods (e.g., data cleaning, data acquisition, etc.), it has been suggested that within the context of data markets, data buyers might utilize such methods to fairly compensate data owners. Here we demonstrate that data valuation metrics are inherently biased and unstable under simple algorithmic design choices, resulting in both technical and ethical implications. By analyzing 9 tabular classification datasets and 6 data valuation methods, we illustrate how (1) common and inexpensive data pre-processing techniques can drastically alter estimated data values; (2) subsampling via data valuation metrics may increase class imbalance; and (3) data valuation metrics may undervalue underrepresented group data. Consequently, we argue in favor of increased transparency associated with data valuation in-the-wild and introduce the novel Data Valuation Cards (DValCards) framework towards this aim. The proliferation of DValCards will reduce misuse of data valuation metrics, including in data pricing, and build trust in responsible ML systems.
[ "data valuation", "fair compensation", "transparency", "fairness", "bias" ]
Reject
https://openreview.net/pdf?id=4mFEb3JvMc
https://openreview.net/forum?id=4mFEb3JvMc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sjRyRPzdva", "rnijeZRoLY", "hlLHLhSGyB", "J35Kw5Dk0z", "GyIWAXtDpI", "1QDxbHolvr" ], "note_type": [ "decision", "official_review", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737523877964, 1730724169802, 1734129659059, 1730356449459, 1730825150877, 1730919764928 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7961/Reviewer_7B5y" ], [ "ICLR.cc/2025/Conference/Submission7961/Area_Chair_c4Bv" ], [ "ICLR.cc/2025/Conference/Submission7961/Reviewer_Xakx" ], [ "ICLR.cc/2025/Conference/Submission7961/Reviewer_7aic" ], [ "ICLR.cc/2025/Conference/Submission7961/Reviewer_sEHS" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper investigates the properties of data valuation metrics, specifically their bias and instability, through case studies on real-world datasets. The authors highlight the limitations of data valuation metrics, including the impact of preprocessing techniques, minority groups, and technical and ethical side effects. To address these limitations, they introduce DValCards, a standardized framework for reporting critical information and supporting decision-making about data valuation methods. The paper presents results on the instability of data valuation methods across different imputation techniques and highlights the implications of these inconsistencies using a case study.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors provide an elaborate and comprehensive analysis of the impact of preprocessing techniques and class imbalance on data valuation metrics, especially imputation methods and their effects on class balance and rank stability. 
12 Open-ML datasets are considered and 4 Data Valuation frameworks are chosen for comparison.\", \"The introduction of DValCards is a valuable contribution to the field, providing a standardized framework for reporting critical information about data valuation metrics.\", \"The paper raises important ethical considerations and implications of using data valuation metrics in the context of a case study that highlights risks to undervalued groups.\"], \"weaknesses\": \"- The effectiveness of imputation preprocessing methods in standard data valuation tasks (e.g., weighted training, noisy label detection) is not thoroughly evaluated, and the authors could provide more evidence. Instability of values is known in the Data Valuation literature, but specifics with respect to imputation methods are not widely studied.\\n- Since this paper is trying to unify a setting for all Data Valuation methods, it could benefit from expanding its scope to include runtime analysis (FLOPS analysis of the method), limitations with respect to scaling, and the tradeoff with performance. It would be worth including the impact of validation sets [1,2] on data value. It might be worth looking into other works that unify data valuation frameworks, such as [3].\\n- The DVal Report in the DVal Card reports the data value range. However, for a dataset, this may vary by just varying the learning algorithm, the performance metric, or the valuation framework. Data Values (especially their min/max values) can vary, but their rank stability and performance on standard data valuation tasks (noisy label detection or weighted training, for instance) can help improve this part of the report. \\n\\n[1] Kwon, Yongchan, and James Zou. \\\"Data-oob: Out-of-bag estimate as a simple and efficient data value.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] Jahagirdar, Himanshu, Jiachen T. Wang, and Ruoxi Jia. 
\\\"Data Valuation in the Absence of a Reliable Validation Set.\\\" Transactions on Machine Learning Research.\\n\\n[3] Jiang, Kevin, et al. \\\"Opendataval: a unified benchmark for data valuation.\\\" Advances in Neural Information Processing Systems 36 (2023).\", \"questions\": [\"Can the authors provide more information on whether imputation methods actually improve performance on standard valuation tasks ?\", \"It would be nice to use data valuation methods (used in certain places) instead of data valuation metrics (used more commonly in the paper), since they are generally referred to as frameworks.\", \"Can we see more examples of DVal Cards in this work? For a major contribution, the main paper has only one DVal Card and it seems to be a generic setting. It would be really interesting to see multiple DVal Cards and will reinforce the utility of having such a framework. A comparison\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies the sensitivity of data valuation characteristics through an experimental study on classification datasets. The authors highlight how common approaches can lead to misinterpretation by drastically changing the values of these metrics, and proposes to reduce the potential for misuse by pairing datasets with \\\"Data Valuation Cards.\\\"\\n\\n## Strengths and Limitations:\\n\\n- See below\\n\\n## Recommendation\\n\\nMy recommendation is to accept the paper at this time given the feedback from reviewers and the lack of a rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"The paper did not receive a rebuttal from the authors, and there was no discussion with reviewers.\"}", "{\"summary\": \"The paper studies data valuation and specifically the transparency of it. 
The authors highlight some issues of existing data valuation methods, in particular the bias of data values, which can result in technical and ethical consequences. The authors provide empirical evidence for such claims. The authors propose a framework called DValCards to encourage transparency in data valuation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The studied problem of data valuation is important and growing.\", \"The paper is relatively well written.\", \"The experimental results are with respect to real-world datasets.\"], \"weaknesses\": [\"The claims against existing works are largely observational and empirical, and do not seem to be theoretically supported.\", \"The motivation for the DValCards can be made better. It seems that before Section 4, the authors are describing the issues with existing data valuation methods. In Section 4, where one might expect a mitigation or solution, a framework is described that does not seem to address these issues.\", \"Furthermore, the framework itself does not seem to be very extensively described or examined, in terms of how it is applicable and beneficial.\"], \"questions\": \"Is there a reason to limit to supervised classification? Is the method limited or more widely applicable to other settings?\\n\\nIn Section 3.1\\n`We find that varying the applied data imputation method results in appreciable variation of data values,`\\n\\nAnd similar mentions of the instability of data valuation methods. The question is: Is the instability arising from the definition of the data valuation? Or from the estimation of the data valuation? 
The former suggests a fundamental methodological flaw of data valuation while the latter is due to the lack of better and more efficient computational techniques.\\n\\nFollowing the previous question, if instability is a key limitation (of either data valuation, or estimation methods), specifically how does the proposed framework in Section 4 address it, by advocating for transparency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper conducts comprehensive empirical evaluations of existing data valuation metrics, identifying significant biases and instability in data-centric machine learning (ML). Key findings include: (1) common and inexpensive data pre-processing techniques can drastically change estimated data values; (2) subsampling using these metrics may exacerbate class imbalance; and (3) data valuation methods may undervalue data from underrepresented groups, raising ethical concerns. In particular, marginal contribution methods, such as Shapley-based approaches for tabular classification, demonstrate high variability due to data imputation preprocessing and may affect class balance and group fairness. To address these challenges and improve transparency, the paper introduces the novel Data Valuation Cards (DValCards).\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-motivated and conveys an essential message: existing data valuation methods, primarily designed for machine learning, may be unsuitable for data compensation in data markets. It highlights various practical challenges that emerge when these methods are repurposed for economic applications. 
Backed by comprehensive experimental analysis, the paper\\u2019s findings offer valuable insights and serve as practical guidelines for the effective design and implementation of data valuation metrics in data market contexts.\", \"weaknesses\": \"The paper raises an important issue, though its main limitation appears to be the lack of a fundamental solution. While DValCards help mitigate the issues of instability and fairness, they primarily serve as a more detailed documentation tool for data valuation methods.\\n\\nThe paper makes a valuable contribution by highlighting the challenges of existing data valuation approaches through extensive empirical evaluations, including issues related to instability, class imbalance, and fairness. However, some of these findings are not entirely unexpected. For instance, the instability of current metrics when different data imputations are applied is not very surprising: if the dataset changes, the data point values will change. In addition, it is not entirely clear why stability to data imputation should be considered an inherent property of a data valuation metric. Regarding fairness, it is not surprising that existing methods, which primarily aim to optimize test accuracy, might introduce bias. 
Nevertheless, the systematic evaluation using real-world data is valuable and provides an important, evidence-based perspective on these issues.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an empirical study of existing data valuation methods in terms of their sensitivity to pre-processing, the consequences of using them for data selection, and the tendency to undervalue minorities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to follow.\"], \"weaknesses\": [\"The paper's technical contribution is a bit limited, mainly focusing on evaluating existing methods.\", \"The findings from the paper are not novel. (1) Regarding the sensitivity to data imputation methods: data valuation fundamentally determines the contribution of a given data point based on the other data used together for training; hence, it is straightforward to see that the value of a data point would change depending on the choice of the imputation method because different imputation methods would change the formation of other data points. (2) Regarding the class imbalance: it is also natural that directly using data values to remove data would lead to class imbalance. This is because data valuation by design would assign the same score to identical data points. As a result, one would either remove two identical data points at the same time or keep them altogether, which in turn leads to a loss of balance in class representation. In fact, there has been existing work that theoretically characterizes the limitation of using data valuation for data selection: https://arxiv.org/abs/2405.03875 (3) Regarding the last finding about undervaluing the minorities: the validity of this finding depends on the choice of validation data. 
If the validation comprises data points all from the underrepresented group, then the value of that group would be high instead of low as reported by the paper.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4ltiMYgJo9
A closed-loop EEG-based visual stimulation framework from controllable generation
[ "Yiwei Kong", "Dongyang Li", "Jiahua Tang", "Chen Wei", "Quanying Liu" ]
Recent advancements in artificial neural networks (ANNs) have significantly refined methodologies for predicting the neural coding activities of the ventral visual stream in human and animal brains based on visual stimuli. Nevertheless, the endeavor to control visual stimuli to elicit specific neural activities continues to confront substantial challenges, including prohibitive experimental costs, the high-dimensional nature of stimuli, pronounced inter-individual variability, and an incomplete understanding of neuronal selectivity. To address these impediments, we propose a novel electroencephalography (EEG)-based closed-loop framework for visual stimulus. Leveraging this framework, we can identify the optimal natural image stimulus within a theoretically infinite search space to maximize the elicitation of neural activities that most closely align with desired brain states. Our framework employs advanced ANN ensemble models to ensure the reliability of neural activity predictions. Furthermore, we conceptualize the brain coding predicted by the ANN model as a non-differentiable black-box process, allowing us to directly analyze the relationship between the administered visual stimuli and the targeted brain activity. Our research demonstrates that, independent of the exactness of the ANN-predicted brain coding, the proposed framework can procure the theoretically optimal natural image stimulus at given cycle steps. Moreover, our method exhibits generalizability across different modalities of brain-specific activity regulation. Our code is available at https://anonymous.4open.science/status/closed-loop-F2E9.
[ "Neural modulation; EEG; Close-loop;" ]
Reject
https://openreview.net/pdf?id=4ltiMYgJo9
https://openreview.net/forum?id=4ltiMYgJo9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yKl0xRa8gQ", "uK05wqk2R0", "s9h8cxTzIw", "nJH9ZUH55a", "nBkGT6wFqP", "j2nsogte01", "flqNX3mUoq", "fkoE9ogJE8", "fL6VHOIviZ", "dsWc4bWvN2", "dc2kkZng8f", "aToYoc1Rsm", "ZrmtCHFlgh", "Y5RB6DV54T", "QJWd0hPjll", "OmtawzdKZR", "MGY6J8apAx", "KriIWhwZhg", "HynurumOET", "Hh59AQHbW1", "6A9ZuCgKhF", "5t1N9w8URe", "392mEmjCws", "38Qa6IsUOI", "1vQLezICgK" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730703823740, 1732588435417, 1732223984630, 1732389225077, 1730664774451, 1730445672978, 1732590602160, 1733680860829, 1732760854191, 1732222802260, 1732227108833, 1732266540585, 1732225653284, 1732524010253, 1732228101553, 1732688863736, 1732524642517, 1732220906736, 1732755954097, 1733197126332, 1730562910834, 1732228392726, 1733220776282, 1737523382846, 1733202945072 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission134/Reviewer_FvdH" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_CRyH" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_4DNY" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_FvdH" ], [ "ICLR.cc/2025/Conference/Submission134/Area_Chair_K625" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_4DNY" ], [ 
"ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_CRyH" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_EgtU" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_EgtU" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Submission134/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission134/Reviewer_FvdH" ] ], "structured_content_str": [ "{\"summary\": \"This paper develops a method for choosing the optimal image stimulus to present to a human subject to elicit a specific desired pattern of neural activity (as measured using EEG).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The problem that this paper takes on is very interesting. I am aware of previous research that has attempted to find preferred visual stimuli for single neurons, so as to figure out what that neuron \\\"prefers\\\", but this paper seems to be taking on a related but quite different issue, which is: given a whole pattern of population activity, what stimulus would elicit that overall pattern? This seems like a project that may have useful clinical applications in the future, as well as being scientifically interesting in its own right.\", \"weaknesses\": \"I found the paper hard to follow. I admit that a contributing factor here may be my own lack of experience with respect to some of the techniques the paper uses, such as EEG data, diffusion models, and genetic algorithms. 
However, I do think that the presentation of the paper could be much clearer, and I will list some examples below of specific issues that came up with respect to clarity.\\n\\n- Most of the figures I did not understand, and as far as I could tell, the figures aren't referred to in the main text, so it was difficult to situate what the purpose of each figure was in the overall narrative of the paper. \\n- It is unclear what the purpose of the MDP is in Section 3.2 (see Questions below).\\n\\nIt would probably have been useful to include a Supplemental section to explain some of the methods in more detail.\", \"questions\": \"In Sec 3.2, what do the actions and states of the MDP refer to in this context? Are the actions features, because the algorithm is selecting features of the neural activity to represent? Or are the actions the selected images to be used as visual stimuli?\\n\\nWhat is the motivation for not updating the gradients in the model? The abstract says this allows \\\"us to directly analyze the relationship between the administered visual stimuli and the targeted brain activity\\\", but I wasn't sure why this is the case or where in the paper this motivation is fully explained or justified.\\n\\nIn Figure 1, what is the difference between \\\"selection\\\" and \\\"action\\\"?\\nIn Fig 2, the distance metric seems to be applied to images, but I thought the point was to compare induced and target neural activities.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper revision\", \"comment\": \"We would like to express our sincere gratitude to the reviewers for their insightful and valuable comments on our manuscript. 
Based on the feedback, we have made several updates to improve the clarity and quality of the writing, as well as to better support the experimental results.\\n\\nTo facilitate the comparison between the two versions, we have listed the changes below:\\n\\n**1. General Revisions**: In response to the writing errors and unclear expressions raised, we have revised the manuscript to improve its clarity and readability. Specifically, we have corrected the mathematical notations, added the correct references for each figure and table, and provided additional detailed implementation information in **Section 4**.\\n\\n**2. Figures and Descriptions**:\\n\\n- **Figure 1**: For the extracted features section in our framework, we have updated the caption to provide a more comprehensive illustration. This revision offers a clearer understanding of our motivation.\\n\\n- **Figure 2**: We have updated **Figure 2**, which offers a clearer and more concise presentation. On the left, the figure provides a step-by-step breakdown of the process, with more explicit descriptions of each step in the framework. On the right, it includes a detailed illustration of the pipeline, followed by descriptions of the two case studies.\\n\\n**3. Section 3 Reorganization**:\\n\\nWe have reorganized **Section 3 \\\"Method\\\"** and created a new **subsection 3.1: \\\"Closed-loop Framework\\\"** to further clarify the mathematical formula, making the section more intuitive to readers. In **Sections 3.3 and 3.4**, we have updated the descriptions of the algorithms (or pseudocode) for both the retrieval and generation settings, which now include more detailed steps and improved clarity to better convey our implementation process.\\n\\n**4. Appendix Updates**:\\n\\n- **Appendix A.1**: More details have been added regarding the dataset, the interactive search process, and the heuristic generation algorithm, providing a complete insight into our methodology. 
(**Page 13, FvdH**)\\n\\n- **Appendix A.2**: We have included extensive quantitative results of all subjects in both cases. We have also provided performance comparisons of iterative improvement in different target images across all 10 subjects in EEG semantic representation and spectrum intensity (**Table A.1**). Additionally, we have reported the improvement through in-subject t-tests across all subjects for both cases and correlation analysis between EEG features and CLIP representations (**Figure A.3**). (**Page 15, 4DNY**)\\n\\n- **Appendix A.3**: In this section, we elaborate on the validity verification of synthetic EEG signals. Specifically, we computed the MSE and Pearson's correlation coefficient on synthetic EEG signals from AlexNet and CORnet-S across all 10 subjects. We also present results from training with ATM-S on two DNN models with pre-trained end-to-end models and randomly initialized end-to-end models (**Table A.2, Figures A.4-A.8**). All these assessments comprehensively verify the performance of the EEG encoding model and support our experiment settings on the pre-trained end-to-end AlexNet. (**Page 17, 4DNY and EgtU**)\\n\\n- **Appendix A.4**: We have added supplementary examples, including EEG semantic representation and intensity cases, where we show the complete iterative process of different subjects and targets in each case. Additionally, we have provided examples where regulation failure gradually leads to Goal Drift (**Figure A.4.2**). (**Page 23, EgtU**)\\n\\n**Changes 1&2&3** have led to improvements of the manuscript without affecting the experimental results presented in this paper. **Change 4** adds supporting validations.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"Thank you for your recognition of our work. 
Here are our responses to your questions:\\n\\n**Q1: The authors state that the identified stimulus is \\\"optimal.\\\" Based on the MDP formulation of the algorithm, I understand that it finds a local minimum. Could you clarify how this approach ensures finding a global optimum, rather than a local one?**\", \"a1\": \"As you correctly point out, using the Markov Decision Process (MDP) formulation, it is possible for the algorithm to converge to a local optimum in the retrieval case, especially in non-convex search spaces. However, in our generative case, there is no real global optimum. Human visual perception is a complex system influenced by many factors, introducing significant randomness, which makes it challenging to define convex optimization problems with precise mathematical formulations.\\n\\nTherefore, we adopt heuristic methods (such as genetic algorithms) to determine the optimal results. For more details on the methods, please see **Section 3** in the updated pdf. For the generation task, we introduce random sampling in each iteration. Regarding the termination conditions, (1) if the increment of similarity between features is less than 10e-4, we consider the process to have converged, and (2) if the number of iterations reaches 90, the process stops. As shown in **Figure 4A**, the similarity between brain activity features increases with each iteration and tends to stabilize. In addition, the example in **Figure 4C** shows that the optimal visual stimulation is consistent with human prior knowledge. Moreover, if we relax the iteration limit and allow more iterations, the generation model may continue to optimize until it reaches the upper limit of \\\"optimality\\\", because additional iterations provide the algorithm with more opportunities to explore and improve the solution. \\n\\n**Q2: Why did you limit the comparison to the first 250 ms (Figure 4D)? 
While the initial 250 ms may indeed capture critical visual information, it is common in EEG analysis to display the full 1000 ms post-stimulus data. Could you elaborate on this choice?**\", \"a2\": \"Thank you for pointing this out. As shown in **Figures 4D, 4E, and 4F**, the value of 250 refers to the number of data points, not milliseconds, which means that we are showing data within 1000ms at a sampling rate of 250Hz. We realize that this may be confusing. To improve clarity, we have updated the unit to milliseconds (ms). The dataset we used, THINGS-EEG2, originally spanned 1000 ms and had a sampling rate of 1000 Hz. During preprocessing, the following steps were applied using Matlab (R2020b) and the EEGlab (v14.0.0b) toolbox as described in the dataset publication: (1) the data were filtered using a Hamming window FIR filter with a 0.1 Hz high-pass and 100 Hz low-pass filter, (2) the data were re-referenced to the mean reference and downsampled to 250 Hz [1].\\n\\n\\nReferences\\n\\n[1] Gifford A T, Dwivedi K, Roig G, et al. A large and rich EEG dataset for modeling human visual object recognition[J]. NeuroImage, 2022, 264: 119754.\"}", "{\"title\": \"Response for revision\", \"comment\": \"Thank you for taking the time to carefully review our revised manuscript and for providing such valuable feedback. Your suggestions have significantly improved the quality of our work.\\n\\nWe understand your concern about the substantial changes between the original and revised manuscripts. We have provided a detailed comparison of the two versions in the **Q4 section** of our **Global Response**. We encourage you to review this comparison to gain a better understanding of the specific modifications we have made.\\n\\nWe hope these revisions have successfully addressed at least most of the concerns raised in your previous review. 
Thank you again for your time and consideration.\"}", "{\"summary\": \"This is a highly innovative study demonstrating the capability to identify visual stimuli that closely match the original stimuli eliciting specific EEG activity patterns. The algorithm is well-explained and, to my knowledge, represents one of the first successful applications of this approach with EEG data.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Very interesting study, timely, solves an important question, is generalizable.\", \"weaknesses\": \"I can hardly identify any significant limitations in the current study. However, I have two questions:\\n\\nThe authors state that the identified stimulus is \\\"optimal.\\\" Based on the MDP formulation of the algorithm, I understand that it finds a local minimum. Could you clarify how this approach ensures finding a global optimum, rather than a local one?\\n\\nWhy did you limit the comparison to the first 250 ms (Figure 4D)? While the initial 250 ms may indeed capture critical visual information, it is common in EEG analysis to display the full 1000 ms post-stimulus data. Could you elaborate on this choice?\", \"questions\": \"The authors state that the identified stimulus is \\\"optimal.\\\" Based on the MDP formulation of the algorithm, I understand that it finds a local minimum. Could you clarify how this approach ensures finding a global optimum, rather than a local one?\\n\\nWhy did you limit the comparison to the first 250 ms (Figure 4D)? While the initial 250 ms may indeed capture critical visual information, it is common in EEG analysis to display the full 1000 ms post-stimulus data. Could you elaborate on this choice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper devised a closed-loop framework to find the visual stimuli that can elicit specific neural activities. 
The authors model the whole process as an MDP, and proposed to use the interactive search (mind matching) and heuristic search (genetic algorithm) to solve the problem. While claimed to be general, the authors specify the framework to train the EEG encoding model to generate the synthesized EEG response and test it offline on the THINGS-EEG2 dataset. Visualized results demonstrate the possibility of the whole framework to find the appropriate visual stimuli in the search space. The authors also mentioned its possible impact and insights.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The whole framework is novel and interesting. It addresses the challenge of finding the corresponding stimuli that can evoke a specific brain signal pattern. The framework may have the potential to be applied to a more realistic scenario.\\n2. The paper proposed two different settings for finding the visual stimuli: retrieval and generation, and provided corresponding solutions for them. \\n3. The overall findings may provide interesting neuroscience intuitions and may ignite further contributions.\", \"weaknesses\": \"1. One of the main claims by the authors is the adaptation of the whole close-loop framework. While the authors claim it can be simply replaced by recording EEG data from human participants, there are actually no more concrete demonstrations on how. For example, what is the \\\"specific neural activity in the brain\\\" in this paper and in a possible real scenario? What's the difference? And how difficult is it and how much effort will it take to apply the framework to the real world? It's always easy to just claim a methodology \\\"generalizable\\\", but without more justification that doesn't actually help strengthen the contribution of the paper.\\n2. Based on 1, I feel it is not sufficiently demonstrated in the paper what role the EEG plays in the whole framework. 
As far as I can understand from the current paper, it seems to be related to the reward $R$ in the MDP design, because it should provide signal based on the desired neural activities. However, we know neither how the reward is exactly calculated nor what kinds of neural signal the authors care about (e.g., a specific frequency band? a specific shape of waveforms? a specific activation from some brain area?). \\n3. Besides the methodology, it's also not clear how the different parts of this framework perform and contribute to the final result from the experimental aspect. While in the result section, we can see that the framework can yield promising visual stimulus results, it lacks either quantitative experiments and comparison between selection of algorithms, or more detailed explanations on the presented ones. (See questions.) Therefore, it's unclear to me what the exact performance of the whole framework and its individual parts is compared to other solutions.\\n4. Overall, the presentation of this paper is unsatisfying (and that's probably why I have the concerns in 2 and 3). On the one hand, the author is presenting more well-known details in the main content but didn't make their own claims clear. For example, Algorithm 1 and Algorithm 2 are a direct adaptation from previous work. Instead of using space to present them, I wish to see more on how the MDP is constructed. On the other hand, mixing citations with sentences (please use \\citep instead of \\cite) and a few typos (in line 222, algorithm 1, the bracket is not matched) give me the feeling that the paper is not yet ready to be published.\", \"questions\": \"1. What kind of neural activity are you concerned with in your experiment? How will you verify whether the activity is properly stimulated by your visual stimuli?\\n2. If the answer to the previous question is via the EEG encoder, then how can the encoder capture your concerned neural activity? How does the encoder perform? 
How will the selection of the encoder influence the result?\\n3. What is the reward in the MDP?\\n4. For Figure 3.B, why do you choose subject 8 for demonstration? It seems the confidence interval is large. I wonder whether the similarity increase can pass the significance test.\\n5. How to interpret the spectrograms in Figure 4.C? I can't see the difference or some trends from the figure. \\n6. How is Figure 4.D obtained? Why does the \\\"random\\\" also look so good?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their responses to my questions. I've read through the revised PDF. I think this newer version of the paper is clearer than the previous version, but still feel there are several points where things are not very clear. Most figures are referred to in the main text, but not Figure 2, which is still confusing to look at (at least for me), and also not referred to directly by the main text. The main text of Section 3 I also find very hard to follow - even though it seems to provide more details, it is hard to parse. For example, I wasn't sure where the target category comes from - is this a category of the image, like dog vs cat? I thought the target was a specific neural activity pattern.\\n\\nI appreciate the details of the genetic algorithm, which I don't think were given in the previous iteration of the paper, but also wish more motivation and intuition were given for why this genetic algorithm was chosen to generate new images as opposed to some other algorithm.\\n\\nOverall, I will keep my rating as is because I think the paper still needs to be substantially clearer (although to be clear, I do think this is better than the previous version, and my only reason for keeping the rating the same is that my decision as to whether it is ready to be published in the current state is unchanged). 
I would like to see this paper published eventually, but just think that it needs further revision into a version that is much clearer and easier to follow.\"}", "{\"metareview\": \"This work presents a closed loop methodology to identify optimal visual stimuli for maximally eliciting neural activation in the brain. The method relies on a combination of diffusion image generation and Markov-Decision-Process-like updates, iteratively generating new images and updating the latent representation of image-induced activity. The authors then test their data on pre-recorded EEG datasets, and use a neural network model trained to generate EEG data given a stimulus to test their framework. The reviewers were split, appreciating the general mathematical formalization of the method and goals of the paper. Several weaknesses were mentioned, with a large majority falling into the category of either clarity or insufficiency of the experimental validation. While the authors improved the clarity in new versions of the manuscript, there remains some question of how well this method will work in real applications, where a number of practical challenges of EEG stability/drift, changes in noise etc. can potentially reveal unforeseen instabilities in the method. Thus I believe a more thorough validation and, especially, either an experimentally validated test or proof that the simulated EEG suffices, would be key to a future submission.\", \"additional_comments_on_reviewer_discussion\": \"There were a number of clarity issues that were discussed and seemingly clarified. One challenge that arose was that one reviewer was unclear on the exact changes in the updated manuscript. I suggest to the senior area chairs & above to consider \\\"freezing\\\" the original PDF and allowing the updated PDF to be uploaded, only replacing the original submission after the decisions have been made.\"}", "{\"title\": \"Response for revision\", \"comment\": \"We appreciate your consideration. 
ICLR has allocated a three-week period for rebuttal and discussion, allowing ample time for the reviewers' feedback to be thoroughly addressed and for the manuscript to be improved accordingly. Therefore, our revisions are aligned with the spirit and guidelines of ICLR.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"Thank you for your thoughtful review and for recognizing the importance of our work. The following is a point-by-point response to your review:\\n\\n**Q1: \\\".. the figures aren't referred to in the main text, so it was difficult to situate what the purpose of each figure was in the overall narrative of the paper.\\\"**\", \"a\": \"Thank you for pointing out this point we did not clarify. To avoid misunderstanding, we have changed the \\\"Selection\\\" in the Figure 1 to \\\"Preference\\\". Our framework is expected to be able to identify which image features (color, texture, background, etc.) are preferred according to the feedback of target EEG features, while the \\\"action\\\" involves deciding the next round of images according to the similarity score based on these similarity evaluations.\\n\\nIn the retrieval task, given that the exact query image is unknown, we initialize with a random set of images and iteratively narrow down to those with higher semantic similarity to the target. Therefore, the \\\"action\\\" refers to choosing which images in each iteration based on their similarity to the target brain features to construct the next stimulus set. The system progressively learns which features are most relevant to the target class by tracking similarities across iterations, assigning higher weights to these features in future steps.\\n\\nIn the generation task, \\\"action\\\" involves identifying images with high similarity to the target, where images with high similarity scores are kept and used as a basis for generating new samples. 
We then apply a genetic algorithm to cross over and mutate these images, while ensuring they retain coherent, human-recognizable semantic content. \\n\\n**Q6: \\\"In Fig 2, the distance metric seems to be applied to images, but I thought the point was to compare induced and target neural activities.\\\"**\\n\\nThank you for pointing out this key information. We have improved Figure 2 to make the details of our framework clearer. The distance metric in Figure 2 is indeed derived from the distance between images. Our hypothesis is that this similarity between images maps the similarity between the induced and target neural activities (although the existence of metamers is not excluded), allowing us to use image features as a proxy for brain activity. Even if this mapping relationship is intractable, we can still obtain a heuristic solution in a gradient-free way. Therefore, we use the visual features of the image embedded in the feature space to approximate the desired target image, thereby continuously bridging the gap between the current stimulus image and the target brain activity.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"**Q6: \\\"All the figures and tables are not referenced in the main text, making it quite difficult to read the figures....\\\"**\", \"a6\": \"We have updated the figure captions and text to clarify these points, ensuring that the figures are fully understood in the context of the manuscript. Together, Figures 3c and 3d demonstrate the effectiveness of our framework in converging on images that match the target's neural representation through iterative refinement.\\n\\n**Q7: \\\"Are there any failure cases? What I can imagine includes: 1) the random samples in the first round roulette wheel fail to cover the target; 2) The generated images at a certain iteration fail to cover the target. The authors are encouraged to discuss this issue.\\\"**\", \"a7\": \"This is an interesting question. 
We considered this problem when designing the algorithm. Failures can certainly occur. Therefore, we used a variety of mechanisms in the implementation to ensure the diversity of the candidate sample set and to prevent the optimization from persistently falling into Goal Drift.\\n\\nFirst, assuming the case of regulating the semantic representation of EEG under the retrieval task, if a round of random samples fails to cover the target, a new round of samples will be obtained through cumulative probability sampling. Since the first round of random sampling is uniformly distributed in space, no matter where the target is in the feature space, our algorithm always guides the samples drawn in the next iteration to move closer to the target direction. Even if the stimulation sample moves in the wrong direction in a certain iteration, the sampling probability of the wrong-direction samples will be reduced, and the sampling direction will be gradually readjusted.\\n\\nOn the other hand, if the case is to regulate the strength of brain channels under the generation task, we adopt crossover and mutation operations at the image feature level to increase the diversity of the iterative population. At the same time, each round of iteration selects individuals with high fitness and retains them as candidate stimuli for the next round. After adding the genetic algorithm, our framework can help the generation model understand which stimulus image features are preferred by the target EEG features, and guide the generation model to continuously update the image details in this direction.\\n\\n**Q8: \\\"...other factors can not be overlooked: 1) the limitation of EEG (low spatial resolution) in quantifying brain activity. It might be possible that different stimulus image evoke similar EEG responses due to the limitations of EEG. 
2) The limitation of the model for EEG feature prediction (the encoding model 3.1)...\\\"**\", \"a8\": \"Thank you for your suggestion, we agree with your point of view. We chose EEG because of its low online acquisition cost and timely feedback from subjects. However, EEG signals are non-stationary and are greatly affected by factors such as equipment, environment, and psychological state of subjects. Therefore, there may be problems with inaccurate control of specific channels by stimulus images, so the real-time performance of the system is particularly important. Our encoder determines the feature type of EEG response, so even if metamers exist, the goal of approaching the target EEG response feature can still be achieved. In addition, since the performance of the EEG encoding model has a great impact on our framework, we supplemented the experiments in the Appendix to demonstrate the effectiveness of the encoding model used in the experiment.\\n\\n**Q9: \\\"...'quotation marks' are not in right format The font size of some text in the figures are too small to read. \\\"**\", \"a9\": \"Thank you for your suggestion, we have updated the manuscript to ensure that quotes are formatted correctly and to improve font size for better readability.\\n\\n**Q10: \\\"Typos: in Figure 4 captions: (F) is for O2 channel ?\\\"**\", \"a10\": \"Yes, Figure 4F represents the O2 channel. We have fixed this issue in the manuscript.\\n\\n\\nReferences\\n\\n[1] Walker E Y, Sinz F H, Cobos E, et al. Inception loops discover what excites neurons most using deep predictive models[J]. Nature neuroscience, 2019, 22(12): 2060-2065.\\n\\n[2] Bashivan P, Kar K, DiCarlo J J. Neural population control via deep image synthesis[J]. Science, 2019, 364(6439): eaav9436.\\n\\n[3] Pierzchlewicz P, Willeke K, Nix A, et al. Energy guided diffusion for generating neurally exciting images[J]. Advances in Neural Information Processing Systems, 2024, 36.\\n\\n[4] Luo A, Henderson M M, Tarr M J, et al. 
BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity[C]//The Twelfth International Conference on Learning Representations.\\n\\n[5] Luo A, Henderson M, Wehbe L, et al. Brain diffusion for visual exploration: Cortical discovery using large scale generative models[J]. Advances in Neural Information Processing Systems, 2024, 36.\"}", "{\"title\": \"Thanks for the revision\", \"comment\": \"Thanks for the authors' effort in answering my questions and revising the paper. I have read the revised paper thoroughly and it's much better. However, I cannot find the original manuscript so I'm not completely sure how much the paper differs from the previous version unless the authors highlight the changes, and it looks like a brand new paper to me. Therefore, I suggest that the paper may need to go through another round of review to get a fairer judgement and will keep my decision.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"Thank you for your thorough analysis and constructive feedback on our paper. Here are our point-by-point responses to your questions:\\n\\n**Q1: \\\"....What are the criteria and how to validate that the synthetic images are the \\u201coptimal\\u201d subset that can evoke specific neural activities? Similar issues exist in other modules, e.g., feature selection and interactive search. \\\"**\", \"a1\": \"We have explained it in detail in Section 3.1 Closed-loop Framework. We determine whether the current image is close to the optimal stimulus by calculating the similarity score between the EEG features generated by the current stimulus and the target stimulus image. In addition, in order to avoid misunderstanding, we modified the section on \\\"Feature Selection\\\" in our original manuscript. Section 3.1 is now the design of the entire framework, including the definition of the EEG features approximated in this paper. 
Among them, \\\"Interactive Search\\\" is a case of EEG semantic representation in our closed-loop framework, which is more suitable for solving with an iterative algorithm based on retrieval rather than generation. We have carefully verified the effectiveness of each module. Please refer to the updated PDF for more details.\\n\\n**Q2: \\\"...The limitations of past studies on closed-loop neural encoding/decoding were not adequately justified, weakening the contribution of the study.\\\"**\", \"a2\": \"Thank you for your insight thoughts on this research. Most of the past research has stayed at the level of encoding and decoding methods of different modal neural data or behavioral data, but there are few studies that combine the characteristics of the two technologies to explore potential applications. Researches like Tolias and DiCarlo et al. on mice and monkeys confirmed that specific images can stimulate the activity of target neurons [1][2][3]. Luo et al.'s research on visual cortex selectivity revealed the potential of combined encoding/decoding research [4][5]. Based on previous research, this paper provides a closed-loop stimulus generation framework that introduces natural priors to further explore the potential of optimally designed stimuli in regulating brain activity. In Appendix A.1 and A.2 of the experiment, we give the results of the verification of the effectiveness of the encoding model and decoder to support the methodology we proposed. We strongly agree with your suggestions, and our future work will focus on exploring the limitations of closed-loop neural encoding/decoding to provide more support for this methodology.\\n\\n**Q3: \\\"...The subtitles are not well match the items in the framework, making the manuscript is not easy to follow.\\\"**\", \"a3\": \"According to your suggestion, we have updated the manuscript to better align the subtitles with the items in the framework, making it easier to follow. 
Additionally, the general writing improvements are addressed in the Global Response. See the updated pdf for more revised results.\\n\\n**Q4: \\\"...This module is very critical for the proposed framework. In addition, the implement details of the encoding model are not clear, e.g., was the model trained using individual data or data from multiple subject? How many training samples are used to train the encoding model? How to validate the model ?\\\"**\", \"a4\": \"To address your concerns, we have added verification of the effectiveness of the encoding model in Appendix A.1 and A.2. We have supplemented the detailed experimental configuration of the encoding model in Section 4.1. Due to the non-stationarity of EEG and differences between subjects, all our models are in-subject. The THINGS-EEG2 dataset is used to train, test, and validate our encoding model. The dataset includes paired visual and evoked EEG data from 10 different subjects. For each subject, the training set contains 1,654 different visual stimulus objects (10 images per object) and the corresponding EEG responses. The test set contains 200x12 image samples and 200x1 EEG data. Detailed information on the specific definition of the black box encoding model is provided in Section 3.1, please refer to the updated pdf for more information.\\n\\n**Q5: \\\"Is the EEG encoder which has been aligned with CLIP image features a good choice? This alignment may introduce bias in feature representation of the target and generated EEG signals. Why not a naive EEG encoder ?\\\"**\", \"a5\": \"The EEG Encoder in our Figure 1 is a general representation, and its specific structure depends on the type of EEG feature case to be regulated. For the case of EEG semantic features in Section 3.3, the EEG feature extractor uses the EEG Encoder that has been aligned with the CLIP image features. 
However, for the case of channel-wise channel intensity features in Section 3.4, the EEG feature extractor calculates the PSD features of different channels. Therefore, this EEG Encoder can be a feature predictor that aligns arbitrary features, not limited to alignment with images or channel energy.\"}", "{\"comment\": \"thanks for the clarification on Q2 and the discussion on Q1. I am not sure if I agree with our discussion of Q1. Here I meant an algorithmic/computational proof of optimality, but I also understand that this might be beyond the focus of the current paper. My suggestion is to make this more clear in your paper to avoid confusion. I keep my score as it is, this is a very nice paper and I will be happy to see it in ICLR.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"**Q1: \\\"What is the \\\"specific neural activity in the brain\\\" in this paper and in a possible real scenario? What's the difference? And how difficult is it and how much effort will it take to apply the framework to the real world? \\\"**\", \"a1\": \"We used the EEG data of each subject's training set to train the encoding model, and synthesized its EEG signal on the test set to simply replace the EEG data of human subjects. Because the real and specific neural activities of the subjects are complex and it is difficult to predict their dynamic characteristics for a long time, we compressed the specific neural activities into neural representations and approximated their neural activities by approximating the target neural representations.\\n\\nAt present, the difficulty in applying this framework to the real world for online closed-loop control is that the amount of data from multiple rounds of feedback from each subject is scarce, and the quality of the collected signals is poor due to the influence of factors such as the environment and equipment. These factors lead to a slow convergence of the final algorithm. 
In addition, methods to improve the overall performance of the framework include finding suitable neural representations or using more advanced encoding models.\\n\\n**Q2: \\\"...we know neither how the reward is exactly calculated nor what kinds of the neural signal the authors are caring about.\\\"**\", \"a2\": \"Thank you for your careful consideration of this question. Our original manuscript missed some important details. We have revised the manuscript comprehensively. The specific revisions can be found in Global Response Q3. Unlike those works that directly control human behavior by generating images (Wei et al.), we introduce EEG as an observation of brain activity and control neural activity by controlling different types of neural representations of EEG. In Section 3.1, the reward R in the MDP design is the similarity score of the EEG features generated by the stimulus image of the current state and the target image. We introduce two different cases in Sections 3.3 and 3.4: EEG semantic representation and EEG channel energy representation, respectively. EEG semantic representation measures the information in the EEG corresponding to the semantics of the image category, which is obtained by pre-trained EEG Encoder aligned with CLIP. EEG channel energy representation corresponds to the PSD feature of a certain channel, reflecting the activation degree of the brain area, and is obtained by direct calculation.\\n\\n**Q3: \\\"...It lacks either quantitative experiments and comparison between selection of algorithms, or a more detailed explanations on the presented ones. ...it's unclear for me what the exact performance of the whole framework and individual parts compared to other solutions.\\\"**\", \"a3\": \"Thank you for pointing out this issue. We have revised the manuscript to clarify more technical details. 
In addition, we have added quantitative experiments in the Appendix A.2 and A.3, including the comparison of different encoding models, the verification of the effectiveness of the encoder and the quantitative results of all subjects. Our comprehensive revisions can be found in the updated PDF.\\n\\n**Q4: \\\"I wish to see more on how the MDP is constructed. On the other hand, mixing citations with sentences (please use \\\\citep instead \\\\cite) and a few typos (in line 222, algorithm 1, the bracket is not matched) ...\\\"**\", \"a4\": \"Thank you for your suggestion, we have revised the manuscript and reported it in Global Response Q4. Please refer to our latest version PDF for more details.\\n\\n**Q5: \\\"What kind of the neural activity are you concerning in your experiment? How will you verify whether the activity is properly stimulated by your visual stimuli?\\\"**\", \"a5\": \"Our framework attempts to summarize different neural activities into electrophysiological features, and use this type of features to guide the design of visual stimulation. Based on this, we proposed two cases, corresponding to the visual semantic features of EEG and channel-wise channel intensity features. Our purpose is to verify the effectiveness of our closed-loop iterative framework without caring about the performance of the above feature extractor itself. The encoder encodes the EEG generated by brain activity into semantic representations or energy features.\\n\\nTherefore, we can determine whether the current visual stimulation is a better stimulation by calculating the similarity score between the EEG corresponding features generated by the visual stimulation and the target features. From the control results of different channels in Figure 4, it can be seen that the EEG generated by the visual stimulation we designed is very close to the target EEG, reflecting the similarity of the brain activities of the two. 
Our framework provides guidance for online closed-loop neural control experiments on real subjects.\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"We appreciate your thorough review and valuable suggestions on our revised manuscript. Below are some of the changes we made and our point-by-point responses to your questions:\\n\\n**Q1: \\u201c...Most figures are referred to in the main text, but not Figure 2, which is still confusing to look at (at least for me), and also not referred to directly by the main text. \\u201d**\", \"a\": \"Thank you for your suggestions. We have added new annotations to **Figure 1**, hoping they will help clarify our whole framework. Additionally, we have polished **Section 3 Method** in the manuscript accordingly to make it more understandable and logical.\"}", "{\"comment\": \"I would like to thank the authors for addressing my concerns. They have made substantial revisions such that it looks like a new submission to me. Most of my concerns have been addressed in the revision, however, I am not sure if this is the true spirit of rebuttal.\"}", "{\"title\": \"Global Response\", \"comment\": \"We are sincerely grateful to the reviewers for dedicating their time and effort to review our work. We are excited about the consensus among the reviewers regarding the novelty (FvdH, CRyH, EgtU, 4DNY) and significance (FvdH, 4DNY) of our work. Unfortunately, our past submission has been plagued by clerical errors and lack of clarity. So based on suggestions from reviewers, we have conducted a comprehensive revision of our manuscript to clarify the methodology, add missing technical details and polish our paper. We hope to do our best to clarify what we did not explain clearly in the manuscript to allay the concerns of the reviewers. 
In the following, we provide a Global Response to the reviewers' questions and concerns.\\n\\n**Q1: What is the motivation of this work, particularly with regard to the establishment of the closed-loop system and black-box modeling?**\", \"a1\": \"In a closed-loop feedback system, brain activity is influenced by numerous factors, making it highly susceptible to interference. As a result, EEG signals recorded from a single stimulus may not fully capture the desired brain response. This necessitates multiple rounds of stimulation to reinforce and stabilize the response. As mentioned in the Introduction, closed-loop regulation has been seen as a significant technique, both at the level of individual neurons and in larger-scale EEG systems. However, these studies have some limitations, such as a lack of natural priors and of generalization that human beings can understand. This underscores the necessity of the framework we propose.\\n\\nThe black-box modeling we show in **Figure 1** is to verify the effectiveness of our training-free closed-loop framework. We assume that our framework still works even when the specific structure of the encoding model is unknown. This can help us ignore the gains brought by the encoding model and focus on the advancement of the framework itself. So we use a simple encoding model with frozen weights, without the calculation of gradients. Given that the human brain does not provide direct access to neuronal activation patterns or the sequence of bioelectrical responses, these processes are not easily derivable. Therefore, in order to more generally simulate the specific process of our method in regulating human brain responses, it becomes necessary and meaningful to introduce a black-box model.\\n\\n**Q2: How is the EEG encoding model trained to ensure the reliable synthesis of EEG?**\", \"a2\": \"In our framework, the validity and diversity of EEG signals are prioritized over strict reliability. 
The framework aims to regulate target-specific neural activity of individuals. The primary requirement for our pretrained encoding model is to encode natural images to EEG, to approximate the brain activity in response to visual perception. Previous studies have shown that EEG signals evoked by visual stimuli can be aligned with the stimuli, which can be effectively modeled for downstream tasks such as decoding and reconstruction.\\n\\n**Q3: How is the MDP algorithm implemented in this work, including a detailed explanation of the actions, rewards, and other relevant components?**\", \"a3\": \"Our closed-loop stimulus generation framework only depends on the current state and the current action (the current image search space), which has uncertainty and randomness in the search space. At the same time, the feedback of the action (selecting the stimulus image) and the reward (similarity score) is clear, so we model the entire iterative process through an MDP. When performing retrieval as a setting for finding visual stimuli, the space of states (current stimulus sequence) and actions (selecting a stimulus) is explicitly limited, that is, the complexity of the problem is controllable. When using a generative model and combining iterative generation of visual stimuli with evolutionary computation, we set the population fitness (similarity score threshold) as the condition for terminating evolution, so that the space of actions is also limited in this framework.\\n\\n**Q4: Figures and tables are not referenced, making it quite difficult to read the figures. The presentation lacks clarity in certain areas, making the manuscript not easy to follow.**\", \"a4\": \"Below, we outline the key changes made in response to your comments. A modified version of the manuscript for further clarification has been uploaded in the **PDF**.\\n\\n(1) We have revised the full manuscript, unified the mathematical notation, and referenced each figure and table in a logical manner. 
Moreover, we have added more implementation details (**Section 4**). \\n\\n(2) We have updated **Figure 1**, **Figure 2**, and their captions to make our entire framework clearer. \\n\\n(3) In **Section 3.3** and **Section 3.4**, we have updated the description of the algorithm (or pseudocode) in two settings (retrieval and generation) for a better understanding of our implementation details and framework.\"}", "{\"title\": \"Response to revision\", \"comment\": \"Dear Reviewer,\\n\\nWe kindly request your feedback on whether our response, along with the revised manuscript, sufficiently addresses your concerns. Thank you again for your time and consideration.\"}", "{\"title\": \"Response to Reviewer FvdH\", \"comment\": \"Sorry to bother you. We are about to run out of time to respond.\\n\\nWe have made additions and clarifications to this work with the help of all the reviews. We would be grateful if you could confirm whether the rebuttal meets your expectations and if there is any other suggestion.\\n\\nThank you once again for your time and insightful comments!\"}", "{\"summary\": \"The authors proposed a closed-loop stimulation framework for EEG-based visual encoding, aiming to generate visual stimuli to elicit specific neural activities through a controllable image generation strategy. In this framework, the authors control the stimulus image generation by approximating the brain activity evoked by the visual stimulation towards the desired neural response that corresponds to the candidate images rated by human users iteratively. Controlling visual stimuli in visual encoding studies is very important. Meanwhile, the stimulus images in most prior studies are relatively arbitrary as there are no standard criteria. 
The proposed framework provides a possible solution to this problem.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed closed-loop framework for synthetic visual stimuli generation is novel in several ways, in terms of the retrieval strategy for identifying candidate images, the feature selection approach, and the method for addressing the problem of an unknown target query image. The framework and related methodologies are well designed and presented in general.\", \"weaknesses\": \"The weaknesses of the manuscript lie in the lack of details and validations. For example, the details of the encoding model are not sufficient. The authors described the architecture of the encoding model; however, the details for training such an encoding model are missing. The authors should provide details about the training procedure, including data sources. Was the encoding model trained using data from multiple subjects or was it subject-specific? What is the method to validate the encoding model? More importantly, the encoding model was not adequately validated (at least I didn\\u2019t see any results related to the encoding model) given its critical role in the framework. In addition, what are the criteria and how to validate that the synthetic images are the \\u201coptimal\\u201d subset that can evoke specific neural activities? Similar issues exist in other modules, e.g., feature selection and interactive search. The authors are encouraged to validate each module separately rather than integratively.\", \"questions\": \"1. The limitations of past studies on closed-loop neural encoding/decoding were not adequately justified, weakening the contribution of the study.\\n2. The subtitles are not well match the items in the framework, making the manuscript is not easy to follow.\\n3. The encoding model has not been adequately validated. This module is very critical for the proposed framework. 
In addition, the implement details of the encoding model are not clear, e.g., was the model trained using individual data or data from multiple subject? How many training samples are used to train the encoding model? How to validate the model?\\n4. Is the EEG encoder which has been aligned with CLIP image features a good choice? This alignment may introduce bias in feature representation of the target and generated EEG signals. Why not a naive EEG encoder? \\n5. All the figures and tables are not referenced in the main text, making it quite difficult to read the figures. For example, what is encoded by the dot size in Figure 3c? What is the image with red boundary in Fig. 3d step 10?\\n4. Are there any failure cases? What I can imagine includes: 1) the random samples in the first round roulette wheel fail to cover the target; 2) The generated images at a certain iteration fail to cover the target. The authors are encouraged to discuss this issue. \\n6. \\u201cSince different stimulus images in our framework can produce the same or similar EEG features\\u201d\\u2014this could attribute to the existence of Metamers. However, other factors can not be overlooked: 1) the limitation of EEG (low spatial resolution) in quantifying brain activity. It might be possible that different stimulus image evoke similar EEG responses due to the limitations of EEG. 2) The limitation of the model for EEG feature prediction (the encoding model 3.1). 
The authors are encouraged to make justifications more carefully.\", \"other_issues\": \"\\u201cquotation marks\\u201d are not in right format\\nThe font size of some text in the figures are too small to read.\", \"typos\": \"in Figure 4 captions: (F) is for O2 channel?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by authors\", \"comment\": \"**Q6: \\\"If the answer to the previous question is via the EEG encoder, then how can the encoder capture your concerned neural activity? How does encoder perform? How will the selection of the encoder influence the result?\\\"**\", \"a6\": \"As mentioned in Q1, a good encoder can obtain more effective representation of EEG activity, but our framework does not care about the type of EEG encoder. The contribution of this paper is to propose a new framework that introduces natural priors, which makes it possible to design optimal stimuli to regulate brain activity. Therefore, effective features will make the performance of our framework grow faster. Future research can focus on how to select effective embedding encoders to represent specific types of neural activities and further promote the application of this framework.\\n\\n**Q7: \\\"What is the reward in the MDP?\\\"**\", \"a7\": \"For this question, please refer to Q3 in Global Response.\\n\\n**Q8: \\\"For Figure 3.B, why do you choose subject 8 for demonstration? It seems the confidence interval is large. I wonder whether the similarity increase can pass the significance test.\\\"**\", \"a8\": \"Due to limited space in the main body, we present the relevant results only for Subject 8. We report the complete performance of all subjects in the updated Appendix. Please review our updated pdf for more details.\\n\\n**Q9: \\\"How to interpret the spectrograms in Figure 4.C? 
I can't see the difference or some trends from the figure.\\\"**\", \"a9\": \"As you said, the changes in the spectrogram in Figure 4.C of our manuscript are not intuitive. We consider that this is mainly due to the performance of the encoding model and the fact that our framework relies heavily on the image feature similarity of CLIP. Although it is difficult to see the difference in the frequency-time diagram of the synthetic EEG data, the trend of the stimulus image approaching the target image can still be clearly perceived through the changes in the stimulus image at the bottom of Figure 4.C.\\n\\n**Q10: \\\"How is Figure 4.D obtained? Why does the \\\"random\\\" also look so good?\\\"**\", \"a10\": \"The random EEG and target in Figure 4 of our manuscript also overlap a lot. This is because we use the encoding model instead of the real data of the brain, so it is limited by the prediction performance of the model itself. However, improving the encoding model is not the purpose of this article. In order to prove the validity of our framework, we added verification on the validity of the encoding model in the Appendix.\\n\\nIn addition, Figure 4 of the work of Gifford et al. [1] shows that using the encoding model built end-to-end with CNN, the prediction accuracy of the zero-shot decreases rapidly from 0.4s onwards, which may be induced by visual stimulation. The real brain response has been basically completed, so the prediction results after nearly 100 data points are basically unreliable. Future work can continue to improve the performance of the encoding model on zero-shot EEG prediction, which will significantly improve the performance of our framework.\\n\\nReferences\\n\\n[1] Gifford A T, Dwivedi K, Roig G, et al. A large and rich EEG dataset for modeling human visual object recognition[J]. 
NeuroImage, 2022, 264: 119754.\"}", "{\"title\": \"Response to Reviewer 4DNY\", \"comment\": \"Dear reviewer,\\n\\nWe are grateful for the time you've taken to carefully read our revision and provide valuable feedback. In order to provide greater clarity regarding the changes we made, we have updated the list of modifications in the **Global Paper Revision**. We hope this will facilitate a more efficient comparison between the two versions.\\n\\nThe ICLR committee has kindly granted us three weeks to revise our manuscript, a valuable opportunity to further improve its quality. We are also pleased to note that both reviewers EgtU and FvdH have acknowledged the significant improvements made in the revised version. We respectfully request that you reevaluate our manuscript in light of these revisions. \\n\\nWe are more than willing to address any remaining concerns or questions you may have within the allotted timeframe.\\n\\nThank you for your time again.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I apologize for my delayed response to the authors. I appreciate that they have responded to each of my comments and made efforts to make the paper clearer in all the ways suggested, and will increase my score accordingly.\"}" ] }
4l3AH8Bhmt
Revealing and Mitigating Over-Attention in Knowledge Editing
[ "Pinzheng Wang", "Zecheng Tang", "Keyan Zhou", "Juntao Li", "Qiaoming Zhu", "Min Zhang" ]
Large Language Models~(LLMs) have demonstrated superior performance across a wide range of tasks, but they still exhibit undesirable errors due to incorrect knowledge learned from the training data. To avoid this, knowledge editing methods emerged to precisely edit the specific model knowledge via efficiently modifying a very small percentage of parameters. However, those methods can lead to the problem of **Specificity Failure**, where the existing knowledge and capabilities are severely degraded due to editing. Our preliminary analysis indicates that Specificity Failure primarily stems from the model's attention heads assigning excessive attention scores to entities related to the edited knowledge, thereby unduly focusing on specific snippets within the context, which we denote as the **Attention Drift** phenomenon. To mitigate this Attention Drift issue, we introduce a simple yet effective method **S**elective **A**ttention **D**rift **R**estriction (**SADR**), which introduces an additional regularization term during the knowledge editing process to restrict changes in the attention weight distribution, thereby preventing undue focus on the edited entity. Experiments on five frequently-used strong LLMs demonstrate the effectiveness of our method, where SADR can significantly mitigate Specificity Failure in the predominant knowledge editing tasks.
[ "model editing", "mechanistic interpretability", "NLP", "language models" ]
Accept (Poster)
https://openreview.net/pdf?id=4l3AH8Bhmt
https://openreview.net/forum?id=4l3AH8Bhmt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yszHM86L19", "yqhLSuTbi2", "xbfEQREFlq", "xM1Q0uC6JV", "x4Zsm9Kiwk", "tP6D5zv64b", "krpk9ayRRx", "hhnw4fD0BR", "feuaL5XJxp", "eOvx3rJu1E", "ZOmv9GLXpV", "TWUp9RgQFJ", "OSwgOs5LMC", "NkAFVJLA9a", "LxMo58CJAd", "I2qKw9GYh0", "Cm3mXXInya", "Bz9thxfuTM", "7cu2PNaAMV", "5NX3QVo7k6", "1CCJZW0Mht", "0AsOfh6irU" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737524099164, 1732148888533, 1730273605351, 1732394989146, 1732149049822, 1732324950185, 1732148126104, 1732546640840, 1732147261242, 1734698871752, 1729820669557, 1732412194573, 1732147535581, 1732148615444, 1730315060457, 1732152996182, 1730618702858, 1732566428160, 1732541819879, 1732147618963, 1732148929296, 1732148404250 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_DhFe" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_DhFe" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_DhFe" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Area_Chair_XCjZ" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_pTRJ" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_DhFe" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_RScL" ], [ 
"ICLR.cc/2025/Conference/Submission11041/Reviewer_pTRJ" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_pEaf" ], [ "ICLR.cc/2025/Conference/Submission11041/Reviewer_RScL" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ], [ "ICLR.cc/2025/Conference/Submission11041/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer pTRJ(Part I)\", \"comment\": \"Thank you for your detailed review and valuable comments. Here are our responses to your concerns:\\n\\n> - W1: Why is KL divergence used to constrain the two attention weights?\\n\\nAttention weights are also commonly referred to as attention distributions, representing the model\\u2019s allocation of attention across different token positions, and they sum to 1. KL divergence is a widely used metric for measuring differences between two probability distributions and has been utilized in prior works [1,2] to quantify differences in attention weights. Therefore, using KL divergence to constrain two attention weights is a natural choice and aligns with the findings in Section 3.3 of our experiments, which show that attention drift, as measured by KL divergence, is positively correlated with specificity failure.\\n\\n[1] Interpreting Self-Attention Weights\\n[2] Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension\\n\\n> - W2: The decline in generalization.\\n\\nFirst, PM measures the probability assigned to the target word of the output. In the unedited model, even when the model correctly knows the knowledge (e.g., assigns the highest probability to the true object), the average probability of outputting that word is only around 10%. 
At the same time, the model may predict other linguistically plausible tokens, such as articles or alternative phrasings, as part of the natural language modeling process. While after editing, the probability assigned to the edited object often becomes disproportionately high, which might indicate potential overfitting. Under some setups, such as GPT-J-ROME, while PM decreases by as much as 20 points, the PS metric only drops by approximately 3 points. This demonstrates that under paraphrase tasks, the loss of knowledge in our methods remains minimal.\\n\\nRegarding the decline in generalization, it is important to note that prior evaluation systems often overlooked specificity failure, achieving nearly 100% generalization at the cost of severe overfitting. What we aim to accomplish is to ensure that the knowledge retrieval process after editing remains as safe and as close to the original Transformer\\u2019s knowledge extraction mechanism as possible. We also believe that a stable knowledge editing method is more critical than achieving near 100% generalization accuracy.\\n\\n> - W3: Ablation on head selection\\n\\nIn Figure 6, we report the Edit Success and Specificity performance across 1,683 editing instances under five different $\\\\gamma$ parameters $\\\\gamma=50, 100, 200, 400, 800$. The p-value analysis further demonstrates that the attention head selection method achieves statistically significant improvements, as shown below:\\n\\n| Metric | $\\\\gamma=50$ | $\\\\gamma=100$ | $\\\\gamma=200$ | $\\\\gamma=400$ | $\\\\gamma=800$ |\\n| ----------- | ----------- | ------------ | ------------ | ------------ | ------------ |\\n| Success | 5.0e-09 | 1.2e-13 | 5.1e-16 | 3.7e-41 | 2.9e-73 |\\n| Specificity | 7.6e-04 | 6.7e-06 | 1.1e-09 | 3.7e-11 | 1.9e-11 |\\n\\nWe also evaluated the performance of randomly selecting the same number of attention heads as used in SADR. 
Below are the Edit Success and Specificity scores:\n\n**Edit success:**\n\n| Metric | $\\gamma=50$ | $\\gamma=100$ | $\\gamma=200$ | $\\gamma=400$ | $\\gamma=800$ |\n| --------------------- | ----------- | ------------ | ------------ | ------------ | ------------ |\n| w/ head selection | 98.48 | 98.06 | 97.85 | 97.58 | 97.67 |\n| w/o head selection | 98.03 | 97.55 | 97.20 | 96.69 | 96.63 |\n| Random head selection | 98.60 | 98.18 | 97.88 | 97.51 | 97.33 |\n\n**Specificity:**\n\n| Metric | $\\gamma=50$ | $\\gamma=100$ | $\\gamma=200$ | $\\gamma=400$ | $\\gamma=800$ |\n| --------------------- | ----------- | ------------ | ------------ | ------------ | ------------ |\n| w/ head selection | 51.48 | 51.87 | 52.32 | 52.52 | 52.48 |\n| w/o head selection | 51.09 | 51.24 | 51.04 | 51.09 | 51.97 |\n| Random head selection | 50.57 | 51.18 | 51.35 | 51.38 | 51.55 |\n\nThese results show that while generalization performance is comparable between random head selection and SADR, SADR demonstrates a consistent advantage in specificity across all $\\gamma$ values (p-value less than 0.05).\"}", "{\"summary\": \"The author finds that existing knowledge editing methods tend to place excessive attention on the knowledge that has already been edited. This leads to failure in the model's answers when the edited subject appears in context (Specificity Failure). This article takes the first step towards alleviating specificity failure, which consists of two parts: 1) Investigating the reason for specificity failure; 2) Proposing a new loss function. In the first part, the author first finds that the last token of the edited subject leads to attention drift and then proposes a preliminary solution to alleviate specificity failure. 
Based on the above findings, this paper proposes a new method (SADR) in the second part, which effectively mitigates the specificity failure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-motivated: it explores the reasons behind the Specificity Failure observed in edited models, and proposes an effective solution to address this issue.\", \"SADR is generalizable: by incorporating an additional loss function, SADR can be applied to various knowledge editing techniques.\", \"The article is well-structured: it first identifies specificity failure through guided experiments and then delves into the causes of specificity failure. Finally, the paper proposes a solution.\", \"The ablation study proves the effectiveness of the method.\"], \"weaknesses\": \"**Main Weaknesses**\n* *W1*: I suggest conducting additional experiments on Mquake [1] to prove the effectiveness of the method. Recent research [1] has shown that existing knowledge editing methods are not good at multi-hop editing. For example, when we edit a piece of knowledge from *<CountryX, Prime_Minister, PersonY>* to *<CountryX, Prime_Minister, PersonZ>*, the corresponding knowledge *<CountryX, First_Lady, PersonY's wife>* should also be changed to *<CountryX, First_Lady, PersonZ's wife>*. Based on the paper's findings, the failure of multi-hop questions is due to the edited model's over-attention on the subject CountryX. So I'm curious about whether SADR can effectively solve the above-mentioned problems. \n\n**Minor Weaknesses**\n* *W2*: I notice that in Line 165, the editing target is represented as $o^*$, while in other places it is represented as $o_{edit}$. Perhaps changing all occurrences of $^*$ to $_{edit}$ can improve the readability of the article.\n\n* *W3*: In Table 2 *Relation*, Equation 3 seems to have extra 'xs'. \n\n**Missing References**\n* Knowledge Editing for Large Language Models: A Survey. 
(2023)\n* A Survey on Knowledge Editing of Neural Networks. (2023)\n\n$Ref$:\n\n[1] Mquake: Assessing knowledge editing in language models via multi-hop questions. (2023)\", \"questions\": [\"**Main Questions**\", \"*Q1*: It would be better if the author could point out the reasons that lead to attention drift. One possible reference could be: after editing, the norm of the model parameters $\\hat{W}$ increases, causing the norm of the hidden layer vector $v^*$ to grow. This leads to an enhanced attention on the last token towards the edited subject.\", \"*Q2*: Compared to conventional editing methods, how much additional time overhead does SADR incur? I noticed that SADR computes the attention weights for each layer before editing.\", \"**Minor Questions**\", \"*Q3*: I notice that $\\mathcal{L}_{SADR}$ traverses all layers $l$ in Equation (2). So my question is: is it possible to achieve the same result by restricting attention weights of only one or a few layers?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer DhFe\", \"comment\": \"Thank you for recognizing our work!\n\nRegarding your second point, it actually reminds us of some experiments we conducted earlier. Initially, after discovering the close correlation between attention drift and specificity failure, we tried using `torch.detach` to prevent gradient propagation on attention weights to alleviate specificity failure. However, the experimental results showed a polarized outcome: the editing either performed very well\u2014outputting $o_{edit}$ with high probability while maintaining specificity\u2014or completely failed, outputting $o_{edit}$ with very low probability. \n\nWe also observed that the original ROME method is more prone to specificity failure on these challenging test cases. 
This might suggest that for some stubborn knowledge, editing methods tend to use a hard-coding approach to integrate it into the model's forward propagation. We have included additional discussions and experimental results on Attention Drift in Appendix D.4. Thank you again for your response.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and constructive feedback, as well as the time and effort in reviewing our work. We are delighted to see that all the reviewers recognize the importance of specificity failure as a critical issue in knowledge editing. Additionally, many reviewers acknowledge our analysis of specificity failure and affirm the generalizability and effectiveness of SADR.\n\nFor each reviewer\u2019s concerns, we have provided detailed clarifications. We hope that our responses successfully address the remaining issues, and we are happy to answer any additional questions during the discussion phase.\n\nWe have also submitted a **revised version of the manuscript**, with the following updates:\n\n1. Fixed typos and resolved missing references, as pointed out by reviewers DhFe and pEaf.\n2. Added an efficiency analysis of SADR in the appendix.\n3. Included additional correlation analyses in the appendix, investigating more factors related to specificity failure. The results further demonstrate that attention drift has a more direct impact on specificity failure compared to other factors.\n4. Added a discussion on the reasons for attention drift in the appendix.\"}", "{\"title\": \"Response to Submission11041 Authors\", \"comment\": \"Thanks for your reply! Some of my concerns have been addressed. I wish I could raise the score by one point, but it is very unfortunate that there is no option for 7 in ICLR. 
However, this does not affect my belief that this is a good paper.\n\nAlso, here are some additional responses:\n\n> The results suggest that the shift or norm of the hidden state vector is weakly correlated with attention drift. In fact, the implementation of ROME already constrains the shift in hidden state vectors during optimization by introducing the clamp_norm_factor. \n> \n\nIn a previous reply, I pointed out that the norm of the model might be one reason affecting the performance. This is because I have conducted experiments in the past and found that: with the increase in the number of edits, although the clamp_norm_factor is used, the norm of the model will inevitably become larger; some existing results can also corroborate this view [1, 2].\n\n> The primary cause of attention drift likely lies in the optimization objective, which hard-codes the knowledge into the model\u2019s forward propagation rather than enabling a more natural and reasonable assimilation of new knowledge.\n>\nI hope that the author can validate this view in the final version.\n\nIn summary, I like your views.\n\n$Ref$:\n\n[1] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue. (ACL 2024)\n\n[2] Model Editing at Scale leads to Gradual and Catastrophic Forgetting. (EMNLP 2024)\"}", "{\"title\": \"Response to Reviewer RScL(Part III)\", \"comment\": \"> - W3: Comparison between the attention-based and decoding-based constraint methods.\n\nDeCK [1] primarily encourages the model to output words with greater differences from the original distribution by employing contrastive decoding, thereby increasing the model's confidence in the edited facts. However, DeCK will increase the model's confidence in its outputs, which leads to more severe specificity failure. Therefore, the formula needs to be modified to constrain the divergence between the model\u2019s output and the original distribution. 
By modifying Equation 6 in [1] as follows:\n\n$\\mathcal F(\\mathbb P_{\\text{edited}}(x), \\mathbb P_{\\text{unedited}}(x)) = \\log \\mathbb P_{\\text{edited}}(x) - 0.5 \\log \\left(\\frac{\\mathbb P_{\\text{edited}}(x)}{\\mathbb P_{\\text{unedited}}(x)}\\right),$\n\nwe can constrain the difference between the edited and unedited distributions during decoding. We refer to this approach as constrained decoding, and the results are as follows:\n\n| **Editor** | **ES** | **PS** | **NS** | **RS** | **DNS** |\n| -------------------------------- | ------ | ------ | ------ | ------ | ------- |\n| **None** | 20.86 | 17.70 | 82.43 | 79.73 | 61.99 |\n| **ROME** | 99.88 | 99.58 | 80.26 | 11.94 | 30.42 |\n| **+ contrastive decoding (DeCK)** | 100.00 | 99.94 | 26.12 | 0.39 | 11.94 |\n| **+ constrained decoding** | 93.09 | 94.46 | 80.73 | 40.28 | 41.84 |\n| **+ SADR** | 99.76 | 96.36 | 80.86 | 27.75 | 49.32 |\n\nAlthough this approach significantly improves the metrics for the Relation task, its performance on all other tasks is decreased. Using decoding methods alone cannot fundamentally address the model's specificity failure and may also significantly harm the success rate of edits.\n\n$Ref:$\n\n[1] Decoding by Contrasting Knowledge: Enhancing LLMs\u2019 Confidence on Edited Facts.\"}", "{\"comment\": \"Thanks for your reply! Finally, I decided to raise my score! I believe this paper will have a significant impact on this field.\"}", "{\"title\": \"Response to Reviewer pEaf\", \"comment\": \"We greatly appreciate your insightful feedback and constructive suggestions, and thank you for your recognition of our work. Here are our responses to your concerns:\n\n> - Weakness: Why the experiments are limited to locate-then-edit methods? 
Typo line 47: Paris\n\nDue to paper length constraints, we focus our analysis of specificity failure and experiments with SADR on locate-then-edit methods to maintain consistency and clarity. We choose the locate-then-edit approach in the main text because it is a mainstream method in knowledge editing, offering state-of-the-art performance across many benchmarks with low computational demands. To illustrate the generalizability of the specificity issue and the proposed SADR method, we conduct extended experiments across more editing methods and datasets in the appendix. \n\nThe typo issue has been corrected in the revised version of the paper.\n\n> - Questions: Have parameter preserving or meta-learning methods also been investigated? What's the RS/RM and DNS/DNM scores for methods like GRACE or ICE? Adding the scores for MEMIT and PMET to Table 1.\n\nIn Appendix E2, we provide metrics for WISE and MEND, which represent these two categories of methods. Our analysis shows that specificity failure remains a significant issue across these approaches, as evidenced by their RS and DNS scores compared to the original model.\n\nWe have also evaluated the GRACE method on our tasks, with the results summarized below:\n\n| **Editor** | **ES** | **PS** | **NS** | **RS** |**RM** | **DNS** |**DNM** |\n| ---------- | ------ | ------ | ------ | ------ | ------ | ------- |------- |\n| **None** | 20.86 | 17.70 | 82.43 | 79.73 | 8.83 | 61.99 |13.81 |\n| **GRACE** | 100.00 | 31.00 | 59.78 | 29.83 | 4.00 | 60.33 |13.60 |\n\nThe results indicate that GRACE does not perform well on the paraphrase task in the Counterfact editing benchmark. For specificity tasks, GRACE shows some success in the Distracting Neighborhood scenario (likely due to its limited generalization capability) but still suffers from substantial overfitting in the Relation task. 
We acknowledge the importance of further exploring methods like ICE and plan to include them in future investigations.\\n\\nAdditionally, to provide a more comprehensive illustration of specificity failure, we have added the scores for MEMIT and PMET to Table 1.\"}", "{\"metareview\": \"Previous knowledge editing approaches can negatively impact the model, especially when the edited knowledge or related content reappears in the context. This paper aims to shed light on this phenomenon, exploring its causes and proposing a method to prevent or reduce this overcompensation in the edited model. To investigate the decline in specificity performance of an edited model, the authors develop two metrics and demonstrate that even a single updated fact can cause a specificity error. An analysis of these errors reveals that they are primarily driven by attention activations\\u2014specifically, the attention module overfocuses on the edited information (attention drift), leading to incorrect predictions. To address this issue, the authors introduce Selective Attention Drift Restriction (SADR) as a method to mitigate false focus. All reviewers agree that this paper makes a clear contribution to the field. It is recommended that the authors carefully revise the paper according to the reviewers' suggestions.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers believe that the paper makes a clear contribution.\"}", "{\"summary\": \"The paper proposes the Selective Attention Drift Restriction (SADR) method to address the issue of specificity failure in knowledge editing for LLMs. This failure occurs when models, after being edited to modify specific factual knowledge, disproportionately focus on the edited entity, leading to incorrect outputs in related contexts. SADR introduces a regularization term during knowledge editing to restrict excessive attention on the edited knowledge. 
The method is evaluated on five language models and shows improvements in mitigating specificity failures without significantly affecting edit success rates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper addresses a critical issue in knowledge editing for LLMs, focusing on the problem of specificity failure, which is essential for ensuring model stability after modifications. The proposed SADR method offers a novel extension to existing techniques by dynamically constraining attention heads to prevent over-attention on edited entities, effectively improving specificity. The method is thoroughly evaluated across multiple models and tasks, showing significant improvements in mitigating attention drift while maintaining high edit success rates. Additionally, SADR is versatile and adaptable to various knowledge editing approaches and model architectures, enhancing its applicability in diverse editing scenarios.\", \"weaknesses\": \"1. The methods section is overly concise, i.e., Section 4 does not provide a thorough explanation of SADR. For example, why is KL divergence used to constrain the two attention weights in Eq. 2? Is there a theoretical basis or any prior work that can be referenced?\\n\\n2. While the SADR method shows significant improvements on the Relation and Distract Neighborhood tasks, the performance drop on generalization metrics suggests that the method struggles to balance specificity and generalization. Table 4 shows a general decline in generalization, especially for PM, which dropped by as much as 20 points. Can sacrificing generalization to improve specificity really be considered effectiveness?\\n\\n3. In Table 6, the max difference with or without head selection is less than 1.5 points (some difference is less than 0.5 points). Could this be due to random fluctuations? Could you provide a significance testing to demonstrate the effectiveness of head selection? 
Additionally, what would the performance be if a head were selected at random?\\n\\n4. There is a lack of efficiency analysis. Does using SADR increase computational load, memory usage, or runtime?\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Submission11041 Authors\", \"comment\": \"Thank you for your response! My doubts have been basically resolved! Finally, perhaps the authors can further validate the results with a pearson coefficient between attention drift and editing difficulty, just as you did before.\"}", "{\"title\": \"Response to Reviewer RScL(Part I)\", \"comment\": \"Thank you for your valuable feedback and constructive comments. We provide the following responses to address your concerns:\\n\\n> - W1: Performance drop in the generalization.\\n\\nFirst, we would like to clarify that our method is designed to constrain the model's excessive focus when the attention mechanism over-focuses on the subject compared to its normal levels, rather than to make the model directly focus less on the subject. In Equation (2), $H_l(S_j)$ explicitly defines that the constraint is applied only when the attention exceeds the maximum attention of the original model.\\n\\nAn ideal knowledge editing process should not drastically alter the Transformer\\u2019s mechanism for knowledge extraction (e.g., the norm of attention weights on specific words). Changing the location of the Eiffel Tower, for instance, should not lead the model to over-focus on the Eiffel Tower while neglecting other contexts (given that attention weights sum to 1). 
For this reason, imposing restrictions only when the attention weights surpass their normal levels is both necessary and beneficial, as this deviation can cause side effects like specificity failure (discussed in Section 3).\n\nRegarding the slight drop in the generalization metric caused by the SADR method (less than 3%), it is important to note that prior evaluation systems often ignored specificity failure, achieving near 100% generalization at the cost of severe overfitting. What we aim to accomplish is to ensure that the knowledge retrieval process after editing remains as safe and as close to the original Transformer\u2019s knowledge extraction mechanism as possible. We also believe a stable knowledge editing method is more critical than achieving near 100% generalization accuracy.\"}", "{\"title\": \"Response to Reviewer DhFe(Part II)\", \"comment\": \"> - Q3: Restricting attention weights of only one or a few layers?\n\nIn our early experiments, we have already evaluated the impact of restricting attention weights in different layers. In fact, the `high_attn_range` argument in the argparse configuration of our submitted code is designed to control which layers' attention weights are constrained. Our findings indicate that restricting only a subset of layers does not yield better results. This is primarily because over-attention occurs across different layers. Therefore, our approach identifies the attention heads exhibiting over-attention across all layers and applies constraints specifically to those heads.\"}", "{\"summary\": \"This work focuses on addressing the issue of over-attention during knowledge editing in large language models (LLMs). Knowledge editing techniques were developed to correct LLMs' errors by precisely modifying a small portion of the model's parameters. However, these methods can lead to Specificity Failure, where the model's existing knowledge and capabilities degrade post-editing. 
From the analysis in the paper, this phenomenon is attributed to Attention Drift, where attention heads excessively focus on edited entities. The authors propose Selective Attention Drift Restriction (SADR), which adds a regularization term to prevent undue shifts in attention during the editing process. Experiments show that SADR effectively mitigates Specificity Failure while maintaining or improving performance metrics like fluency and reliability across multiple LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Specificity is an important problem in knowledge editing, and the proposed method can effectively alleviate this problem.\n2. The authors consider the specificity problem comprehensively and conduct a thorough evaluation of SADR against existing methods and models, providing a comprehensive analysis of its performance.\", \"weaknesses\": \"1. From the experiment results, the proposed method leads to a performance drop in generalization, which is actually an important metric in knowledge editing. In my view, this drop may be caused by the attention-learning method, as it would make the model focus less on the subject in other contexts. This drawback would diminish the contribution of the method.\n2. Although the proposed method demonstrates good performance under the specificity metric, I'm not that convinced by the analysis and conclusion of the reason via the attention head. The attention head may be one reason it focuses more on the subject. However, as the editing is conducted at the MLP in some methods, it may also be the editing vector that influences the specificity.\nThis can be seen from recent work showing that the edit vector's direction [1,2], space [1], and norm [2,3] influence the specificity. For example, if we constrain the updated W, the information flow may not be dominated by huge logits. 
\nSome of these works are contemporary, and I don't require the experiment results, but a proper analysis would encourage me to raise my score. \n3. About the decoding constraints, can you provide a comparison between the attention-based and decoding-based constraint [4] methods here?\n\n[1] AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models\n\n[2] Knowledge Circuits in Pretrained Transformers\n\n[3] Perturbation-Restrained Sequential Model Editing\n\n[4] Decoding by Contrasting Knowledge: Enhancing LLMs\u2019 Confidence on Edited Facts.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for the additional experiments and explanations. I've raised my score.\"}", "{\"summary\": \"The use of LLMs in real-world scenarios and applications creates the need for procedures to correct and update the knowledge in these models. The aim here is to change the model's knowledge without costly retraining in order to prevent hallucinations or correct obsolete facts without diminishing the model's performance.\nRecently, the research field of knowledge editing has emerged, in which various techniques such as fine tuning, in-context editing, memory-based and locate-then-edit methods have already been proposed. The disadvantage of these methods is that they can negatively influence the model, especially if information of the edited knowledge triple or related content appears in the context. The study in this paper has set itself the task of shedding more light on this phenomenon, investigating its cause and proposing a method to prevent or mitigate this overcompensation of the edited model. 
In order to investigate the deteriorating specificity performance of an edited model, the authors develop two metrics and show that even a single updated fact can lead to a so-called specificity error.\nAn examination of these errors leads to the realization that they are mainly caused by attention activations: the attention module places too much focus on the edited information (attention drift) and ultimately predicts an incorrect token. Consequently, the authors propose selective attention drift restriction (SADR) as a method to mitigate this false focus.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper impresses with its consistently comprehensible and stringent argumentation. The authors start with a problem of a current methodology, prove that this problem exists, identify the underlying significant cause and can thus propose a solution method for the problem. The paper is comprehensibly written and error-free throughout, and the illustrations and tables are helpful and well chosen. An additional plus is the ablation study, which deals with the trade-off between editing success and specificity.\", \"weaknesses\": \"A look at the appendix shows that the experiments for this article were much more extensive than stated in the actual paper. In addition to further details and results of the experiments described, further results for additional editing methods (WISE, MEND) and additional data sets can be found here. A human evaluation is also attached. It is a pity that even the section on limitations and future work did not find space in the main text. 
A minor weakness of the paper could be that it is not made clearer why the experiments are limited to locate-then-edit methods, although it is emphasized that the specificity error also occurs with meta-learning and parameter-preserving methods.\", \"typo_line_47\": \"Paris\", \"questions\": \"\u2022\tIt is mentioned that there are specificity errors for models of all types. Have parameter preserving or meta-learning methods also been investigated? It might be interesting to know the RS/RM and DNS/DNM scores for methods like GRACE or ICE.\n\u2022\tI would suggest adding at least the scores for MEMIT and PMET to Table 1\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Rebuttal\", \"comment\": \"Thanks for the author's response; they have dealt with some of my concerns, but I'm still not convinced of the generalization failure, and I think this is the limitation.\nAnyway, the author's response makes sense, and it is good work.\nI raised my score.\"}", "{\"title\": \"Response to Reviewer DhFe\", \"comment\": \"Thank you for your suggestion! We have calculated the Pearson coefficient between ROME's attention drift and the difficulty of knowledge editing (measured by $(1 - P(o_{edit}))$ on ROME-AWD), which is 0.748 with a p-value of 7e-6. This result further supports the view that for hard-to-edit knowledge, the model relies on adjusting attention weights to encode the knowledge, which leads to attention drift and, consequently, specificity failure.\n\nThis result has also been added to Appendix D.4.\"}", "{\"title\": \"Response to Reviewer RScL(Part II)\", \"comment\": \"> - W2: Analysis on the reason for specificity failure.\n\nThis is indeed a very meaningful and important question for our work. 
First, we would like to emphasize that **the specificity problem we investigate in this paper refers to cases where the model\\u2019s ability is negatively affected when content related to the edited knowledge appears in the context** (as noted in lines 44\\u201346).\\n\\nIndeed, the edit vector's direction, space, and norm [1,2,3] can influence the model's specificity performance. However, the referenced works primarily focus on preserving general knowledge and capabilities, rather than addressing the specificity failure that arises when the edited subject appears in the context. To explore the relevance of these factors to the specificity failure problem studied in our work, we conducted a correlation analysis. Specifically, we compared four factors\\u2014**attention drift**, **hidden state norm post-editing**, **L2 distance between hidden states pre- and post-editing**, and the **cosine similarity of hidden states pre- and post-editing**\\u2014with the probability of $P(o_{\\\\text{edit}})$ in specificity tasks.\\n\\n| **Factor** | **Pearson Coefficient (Distracting Neighborhood Task)** | **Pearson Coefficient (Relation Task)** |\\n|-----------------------------|--------------------------------------------------------|----------------------------------------|\\n| **Attention Drift** | **0.49** | **0.62** |\\n| **Hidden State Norm** | 0.01 | 0.31 |\\n| **L2 Distance (Hidden States)** | 0.01 | 0.31 |\\n| **Cosine Similarity (Hidden States)** | 0.02 | -0.15 |\\n\\nThe results show that, compared to the direction or norm of the edit vector, **attention drift has a more direct and significant impact on specificity failure.** We have also included this experiment in Appendix D.3 of the revised version.\\n\\n**Intuitively,** the attention mechanism is likely a key factor contributing to specificity failure. 
Previous studies have shown that when language models recall factual associations, the attention mechanism extracts answers from the hidden states of the subject[4,5]. Editing methods primarily modify the hidden states of the edited subject, which then influence the final output through the attention mechanism. In traditional editing methods (e.g., ROME discussed in Section 2.2), the optimization objective explicitly trains the model to predict the new $o_{\\\\text{edit}}$ given $(s, r)$. **This may create a shortcut, where the subject\\u2019s hidden state is shaped in a way that makes it prone to being overly prioritized by the attention mechanism.** Consequently, whenever the edited subject appears, the model disproportionately outputs $o_{\\\\text{edit}}$, satisfying the optimization objective while inadvertently causing specificity failure.\\n\\n**Experimentally,** as demonstrated in Section 3.3, we show that attention weights are a necessary condition for specificity failure. By **replacing only** **the post-editing attention weights with the pre-editing attention weights**\\u2014while keeping all other components unchanged (e.g., MLP outputs)\\u2014we observe a significant reduction in the probability of incorrect answers and a corresponding increase in the probability of correct ones during specificity tasks. 
This result strongly suggests that attention drift is **a primary driver and a necessary cause** of specificity failure.\\n\\n\\n\\nRefs:\\n\\n[1] AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models\\n\\n[2] Knowledge Circuits in Pretrained Transformers\\n\\n[3] Perturbation-Restrained Sequential Model Editing\\n\\n[4] Dissecting Recall of Factual Associations in Auto-Regressive Language Models\\n\\n[5] Locating and Editing Factual Associations in GPT\"}", "{\"title\": \"Response to Reviewer pTRJ(Part II)\", \"comment\": \"> - W4: Lack of efficiency analysis\\n\\nIn terms of memory usage, the additional variables to store in our method are the attention weights across all layers. These weights can be represented as $L \\\\times H \\\\times S^2$, where $L$ is the number of layers in the model, $H$ is the number of attention heads, and $S$ is the sequence length. The additional storage required is minimal compared to the overall model parameters. During our experiments, we did not observe any noticeable increase in GPU memory usage.\\n\\nRegarding runtime, our method primarily involves computing a mask through comparison of attention weights and calculating the KL divergence. However, due to the use of Python loops in our current implementation, a slight runtime overhead is observed. For instance, when applying the ROME editing method to GPT-J-6B on an A100-PCIE-40GB GPU, the runtime per edit increased from 7.80 seconds (without SADR) to 9.65 seconds (with SADR).\\n\\nIn the revised version, we have included the efficiency analysis in Appendix F.\"}", "{\"title\": \"Response to Reviewer DhFe(Part I)\", \"comment\": \"Thanks for your constructive feedback on our paper. Our response to your questions is as follows:\\n\\n> - W1: Can SADR effectively solve specificity failure on multi-hop reasoning tasks?\\n\\nWe appreciate the suggestion to conduct additional experiments on multi-hop reasoning tasks. 
We conduct experiments on MQuake, and the results (Multi-hop reasoning score, denoted as MS) are as follows:\\n\\n| Editor | None | ROME | +Ours |\\n|--------|------|------|-------|\\n| ES | 17.35 (1.7) | 99.35 (0.4) | 99.60 (0.3) |\\n| MS | 26.90 (1.1) | 16.28 (0.9) | 19.90 (1.0) |\\n\\nOur method indeed alleviates specificity failure in multi-hop reasoning to some extent. However, after analyzing the MQuake dataset, we find that most samples exhibit the following pattern: New fact: <baseball, created in, Japan>. Multi-hop question: *Which political leader governs the country of origin of Mike Krukow's sport?* In such cases, the edited subject is not directly mentioned in the question, limiting the impact of over-attention on failure cases. Instead, most failures arise from the inability to fully incorporate the new facts into the model\\u2019s knowledge, which is a problem of generalization, as highlighted in Section 4.2 in MQuake.\\n\\nAdditionally, we found that a recent study [1] (released after the ICLR submission deadline) provides a multi-hop dataset where the subject is explicitly mentioned. This work attributes the failure of multi-hop questions to the edited model's over-attention on the subject. We test our method on this dataset, and the results also show a significant improvement:\\n\\n| Editor | None | ROME | +Ours |\\n|--------|------|------|-------|\\n| ES | 11.56 (2.2) | 100.00 (0.0) | 100.00 (0.0) |\\n| MS | 91.61 (1.9) | 59.12 (3.4) | 76.76 (2.9) |\\n\\nThese results further validate the efficacy of our approach in mitigating specificity failure under multi-hop reasoning scenarios where over-attention on the subject plays a critical role.\\n\\n[1] Zhang M, Ye X, Liu Q, et al. Uncovering Overfitting in Large Language Model Editing[J]. arXiv preprint arXiv:2410.07819, 2024.\\n\\n> - W2,3 & Missing References\\n\\nThank you for pointing out the typos and reminding us of the missing references. 
In the revised version, we have corrected these issues.\\n\\n> - Q1: The reasons that lead to attention drift.\\n\\nThis is a good question. In traditional editing methods (e.g., ROME discussed in Section 2.2), the optimization objective explicitly trains the model to predict the new $o_{\\\\text{edit}}$ given $(s, r)$. This process can unintentionally shape the subject\\u2019s hidden state in a way that makes it disproportionately prioritized by the attention mechanism, creating a shortcut. As a result, whenever the edited subject appears, the model overemphasizes $o_{\\\\text{edit}}$, achieving the optimization goal but causing specificity failure.\\n\\nTo further investigate whether factors such as the **norm of the hidden layer vector** or the **distance between hidden state vectors pre- and post-editing** contribute to attention drift, we conducted a correlation analysis. Specifically, we examined the relationships between **hidden state norm post-editing**, **L2 distance between hidden states pre- and post-editing**, and **cosine similarity of hidden states pre- and post-editing** with attention drift. The results are as follows:\\n\\n| Factor | Pearson Coefficient |\\n|-----------------------------------|---------------------------------------|\\n| Hidden State Norm | -0.1491 |\\n| L2 Distance (Hidden states) | -0.1484 |\\n| Cosine Similarity (Hidden states)| -0.0483 |\\n\\nThe results suggest that the shift or norm of the hidden state vector is weakly correlated with attention drift. In fact, the implementation of ROME already constrains the shift in hidden state vectors during optimization by introducing the `clamp_norm_factor`. 
The primary cause of attention drift likely lies in the optimization objective, which hard-codes the knowledge into the model\\u2019s forward propagation rather than enabling a more natural and reasonable assimilation of new knowledge.\\n\\n> - Q2: Additional runtime overhead of SADR.\\n\\nRegarding runtime, our method only involves comparing attention weights to create a mask and calculating the KL divergence. However, due to the use of Python loops in our implementation, there is a slight increase in runtime. For instance, on an A100-PCIE-40GB GPU, when applying the ROME editing method on GPT-J-6B, the time required for each edit with and without SADR was 9.65 seconds and 7.80 seconds, respectively. In the revised version, we have included the efficiency analysis in Appendix F.\"}
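The regularization step described in the responses above — comparing pre- and post-edit attention weights to form a mask, then penalizing the flagged positions with a KL divergence — can be sketched in a few lines. The array shapes, the drift-detection rule, and the function names below are illustrative assumptions for a minimal sketch, not the authors' actual implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) along the last axis; eps guards against log(0)."""
    p = p + eps
    q = q + eps
    return np.sum(p * np.log(p / q), axis=-1)

def sadr_penalty(attn_pre, attn_post, drift_threshold=0.0):
    """Minimal sketch of a suppress-attention-drift penalty.

    attn_pre, attn_post: (heads, queries, keys) attention weights before and
    after editing. A query row is flagged as "drifted" when any of its
    weights grows by more than `drift_threshold` relative to the pre-edit
    weights (an illustrative rule, not the paper's exact criterion); the
    penalty averages the KL divergence between pre- and post-edit attention
    over the flagged rows.
    """
    drift = attn_post - attn_pre
    mask = drift.max(axis=-1) > drift_threshold   # (heads, queries): drifted rows
    kl = kl_divergence(attn_pre, attn_post)       # (heads, queries)
    return float((kl * mask).mean())
```

In an editing loop, such a penalty would be added to the editing objective so that gradient updates discourage the edited model from re-allocating attention mass onto the edited subject.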
4ktJJBvvUd
Multi-objective antibody design with constrained preference optimization
[ "Milong Ren", "ZaiKai He", "Haicang Zhang" ]
Antibody design is crucial for developing therapies against diseases such as cancer and viral infections. Recent deep generative models have significantly advanced computational antibody design, particularly in enhancing binding affinity to target antigens. However, beyond binding affinity, antibodies should exhibit other favorable biophysical properties such as non-antigen binding specificity and low self-association, which are important for antibody developability and clinical safety. To address this challenge, we propose AbNovo, a framework that leverages constrained preference optimization for multi-objective antibody design. First, we pre-train an antigen-conditioned generative model for antibody structure and sequence co-design. Then, we fine-tune the model using binding affinity as a reward while enforcing explicit constraints on other biophysical properties. Specifically, we model the physical binding energy with continuous rewards rather than pairwise preferences and explore a primal-and-dual approach for constrained optimization. Additionally, we incorporate a structure-aware protein language model to mitigate the issue of limited training data. Evaluated on independent test sets, AbNovo outperforms existing methods in metrics of binding affinity such as Rosetta binding energy and evolutionary plausibility, as well as in metrics for other biophysical properties like stability and specificity.
[ "antibody design", "diffusion generative model", "preference optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=4ktJJBvvUd
https://openreview.net/forum?id=4ktJJBvvUd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ztEFex55NF", "ud4T6DUVoU", "t6jgUR1SXk", "klJzxuqtl7", "k5hMvMt4qH", "jwo5DGt91t", "g9Cq1sdoAY", "cGkZrsGqA6", "bxuEGDNBaa", "QwbFFy7OVU", "OoZw5J6FaF", "MtZEIZqhuy", "I7R5dk3vAD", "FCFPAXcPlG", "EzHcTXP3bt", "EwWbunUVQv", "9PUOMWbtNh", "4tBfqsfx7y", "0H1ltgamLr" ], "note_type": [ "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732451635720, 1737524013143, 1732428995641, 1734738801075, 1732196643230, 1732196832986, 1732499130811, 1732479327958, 1732196760066, 1730712723224, 1730197364684, 1732195838625, 1732433848719, 1730292118098, 1732196859155, 1732196698200, 1732195796843, 1732196611895, 1732373875513 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Area_Chair_wr12" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Reviewer_fXjU" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Reviewer_neD7" ], [ "ICLR.cc/2025/Conference/Submission9907/Reviewer_fXjU" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Reviewer_vnHk" ], [ "ICLR.cc/2025/Conference/Submission9907/Reviewer_vnHk" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ "ICLR.cc/2025/Conference/Submission9907/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9907/Reviewer_neD7" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely appreciate the reviewer's constructive comments again, which made the quality of our manuscript improved greatly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We sincerely appreciate the reviewer's constructive comments again, which made the quality of our manuscript improved.\"}", "{\"metareview\": \"This paper introduces a multi-objective antibody design method, AbNovo. The authors train an antigen conditioned generative model for antibodies, and then fine tune to maximize binding affinity subject to constraints on biophysical properties like high stability and low self-association. The authors achieve stronger performance on the RAbD in silico antibody design test set than many recent methods in this area. Although the authors' method seems to differ from AbDPO primarily in the introduction of constraints into the preference optimization, the final empirical results are quite strong relative to prior work.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed with new experimental data two major concerns raised by the reviewers: (1) the relative lack of antibody optimization results in the original paper, and (2) optimization results when the antibody-antigen binding pose is not known in advance. Beyond these, the authors also cleaned up the paper substantially, with several missing technical terms and metric definitions now provided in the current draft of the paper. Looking through the draft, these updates (e.g., the inclusion of the new Appendix A.5 defining evaluation metrics, and additional text throughout the message section) appear adequate. 
The reviewers are in agreement and, after a score raise, are unanimously in favor of acceptance.\"}", "{\"comment\": \"**Question 1: The announcement of \\\"The first deep generative model for multi-objective antibody design\\\" in summarized contributions; AbDPO also supports multi-objective optimization.**\\n\\nThank you for your suggestion. We have removed the word \\\"first\\\" from that sentence (Line 71). \\n\\n\\n\\n**Question 2: Question about energy minimization.**\\n\\nIn our initial manuscript, following the post-processing procedures for designed antibodies in previous methods [1, 2], we conducted energy minimization for both backbone and side-chain structures. \\n\\nIn response to your concern, we have added a new experiment where we relaxed only side-chain atoms while keeping all backbone atoms fixed. As demonstrated in the table below, AbNovo continues to outperform other methods in this setting (row \\u201cw.o. fixed backbone\\u201d and row \\u201cfixed backbone\\u201d). \\n\\n| | dyMEAN | AbX | DiffAb | AbNovo |\\n| --- | --- | --- | --- | --- |\\n| w.o. fixed backbone | -1.7 | 4.8 | -1.0 | **-12.1** |\\n| fixed backbone | 607.5 | 457.1 | 427.3 | **89.4** |\\n| fixed backbone w.o. rep | -7.2 | -9.8 | -6.6 | **-17.9** |\\n\\nWe can also observe that the energies for all methods increased significantly. This may suggest that the designed backbone atoms have steric clashes that cannot be resolved without optimizing the backbone conformation. To validate this, we recalculated the Rosetta energy by removing the *atomic repulsion energy* term related to steric clashes. The energy scores drop substantially, suggesting that steric clashes in the fixed backbone contribute to the high energies observed (the third row, named fixed backbone w.o. rep). 
\\n\\nRegarding your question about whether these experiments demonstrate that AbNovo simply provides a better initial structure for Rosetta relaxation, we also computed the backbone structural deviations between pre- and post-relaxation in the setting where both backbone and side chain are relaxed. As demonstrated in the table below, since the backbone structures change very little during relaxation (less than 0.1 \\u00c5), this indicates that AbNovo inherently generates high-quality backbone conformations that do not heavily rely on relaxation for improvement.\\n\\n| | dyMEAN | AbX | DiffAb | AbNovo |\\n| --- | --- | --- | --- | --- |\\n| RMSD between relaxed and unrelaxed | 0.07 | 0.10 | 0.09 | 0.07 |\\n\\n[1] Luo, et al. [Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures.](https://openreview.net/pdf?id=jSorGn2Tjg) NeurIPS 2022. \\n\\n[2] Zhu, et al. [Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary.](https://openreview.net/pdf?id=1YsQI04KaN) ICML 2024. \\n\\n**Question 3: Does the optimization of these physical properties contribute to some chemical validity? For example, does the peptide bond length get closer to the actual length?**\\n\\nWe found that through optimization of these physical properties, although steric clashes cannot be completely avoided, they can be further minimized to improve chemical validity. \\nHere, we evaluate chemical validity with the mean absolute error (MAE) of peptide bond length between native antibodies and designed antibodies. 
When comparing our base model and the preference-optimized model, we found that after preference optimization, the peptide bond length became closer to the actual length.\\nMoreover, AbNovo outperformed other comparative methods in terms of peptide bond length.\\n\\n| | dyMEAN | DiffAb | AbX | AbNovo base | AbNovo |\\n| --- | --- | --- | --- | --- | --- |\\n| peptide bond length MAE | 0.46 | 0.56 | 0.31 | 0.35 | **0.24** |\\n\\n**Question 4: The standard deviation of the physical energy needs to be presented.**\\n\\nFollowing your suggestion, we have now computed the standard deviation of the physical energy. Specifically, for each designed antibody, we ran Rosetta 10 times and report the standard deviations in the table below. The low standard deviation across all methods indicates that this benchmarking metric is robust. \\n\\n| dyMEAN | AbX | DiffAb | AbNovo |\\n| --- | --- | --- | --- |\\n| 1.2 | 2.3 | 1.0 | **0.9** |\\n\\n**Question 5: The AAR performance is excessively high, and it's necessary to check whether the training data of the protein language model contains samples similar to the test set.**\\n\\nThe structure-aware language model used by AbNovo is trained on PDB and AFDB structural data. When training this language model, we strictly filtered out all antibody structures and sequences from the training dataset. \\n\\n**Question 6: I am curious about how many amino acids have mutated in those designed antibodies that outperform natural ones (at least in binding energy).**\\n\\nFollowing your suggestion, we analyzed the designed antibodies that outperform the natural ones in terms of binding energy. On average, 44.9% (standard deviation of 5.8%) of amino acids were mutated in the CDR H3 region, and 19.8% (standard deviation of 2.5%) of amino acids were mutated across all CDR regions.\"}", "{\"comment\": \"**Weakness 3: While the manuscript goes in great theoretical detail, intuition is often lacking. E.g. 
Equation 3 is introduced but an intuition, \\u201cfirst term maximizes rewards, while second term keeps the model close to the reference model.\\u201d, which could facilitate understanding for the reader is missing.**\\n\\nThis is a great suggestion to enhance the readability of our manuscript. We have provided intuitive explanations for the key formulas in the article. For example, \\n\\n1. We provided intuitive explanations for concepts such as the primal-dual algorithm and the dual gradient estimation (Equation 4,5). This equation can be interpreted as appending a penalty term ${J}^{({C})}$ to the original objective ${J}^{({R})}$. The penalty term, which depends on how much the antibody generated by the current model violates constraints, can be adjusted dynamically through the Lagrange multipliers. (Line 256-258). \\n2. We also provided intuitive explanations for the gradient of the Lagrange multipliers (Equation 11). In this equation, the gradient of the dual can be calculated as the expected degree of constraint violation in the sampled antibodies under the current policy model. (Line 332-334)\\n\\nIntuitive descriptions for more formulas can be found in Lines 244-246 and 300-302. \\n\\n**Weakness 4: Many things necessary for fully understanding the paper are moved to the appendix, resulting in decreased readability. Further, this also applies to some of the most interesting results, e.g. Table 9 and especially Figure 4.**\\n\\nFollowing your suggestions, we have moved some of the important results (Table 9) to the main text. However, due to the strict 10-page limit for the main text at the ICLR conference, we were unable to present more results in the main paper. Therefore, some results were included in the supplementary materials, with a clear link provided in the main text. 
We placed the important experimental results at the beginning of the Appendix.\\n\\n**Weakness 5: Some tables are hard to read, as their caption and corresponding text do not exactly describe what is in the table.**\\n\\nFollowing your suggestions, we have revised the captions and provided detailed explanations for better understanding. Specifically:\\n\\nIn Table 1, \\\"reference\\\" represents the native antibody structure and sequence in the testing set.\\n(Line 420-421)\\n\\nFor Table 3, we have explicitly clarified the meaning of \\u201cESM-2 based\\u201d and \\u201cMulti-objective.\\u201d Below is the updated caption:\\n\\nAblation studies for AbNovo on the RAbD dataset. The ablation experiment settings include: without using a language model (w.o. LM), replacing the structure-aware language model with ESM2 (ESM-2 based), using supervised fine-tuning instead of preference optimization (SFT), and incorporating all constraints into the optimization objectives (Multi-objective).\\n\\nSimilarly, we revised other unclear parts of the article to improve clarity (Line 153-156, 461-466, 486-488, and 518-519).\\n\\n**Weakness 6: In the abstract and introduction, a focus is put on \\u201calleviate overfitting issues due to the scarcity of antibody-antigen training data\\u201d, but no analysis supporting such a claim is included.**\\n\\nTo alleviate the scarcity of antibody-antigen training data, we utilized a structure-aware language model pre-trained on massive structures beyond antibody proteins. We included two ablation experiments to demonstrate the relative contribution of the structure-aware language model. (Line 447-456)\\n\\nFirst, we trained an ablation model where we excluded the embeddings of the language model as input features. 
We observed significant drops across nearly all metrics, indicating the importance of the language model.\\n\\nSecond, we also trained an ablation model where we replaced this structure-aware language model with a sequence-only language model (ESM2). It shows that the structure-aware model yielded better results than the purely sequence-based model.\\n\\n| | Rosetta Binding Energy | Evolutionary Plausibility | Constraints | AAR | RMSD |\\n| --- | --- | --- | --- | --- | --- |\\n| w.o. language model | 7.54 | 2.67 | 46.5% | 41.53% | 3.19 |\\n| ESM2 | 1.75 | 2.40 | 30.8% | 49.2% | 2.55 |\\n| AbNovo (base) | -2.60 | 2.41 | 22.7% | 49.9% | 2.19 |\\n\\n**Weakness 7: Question about case studies**\\n\\nWe have now utilized another more reasonable case where DiffAb fulfills all constraints. We note that though the antibody generated by DiffAb fulfilled all constraints, AbNovo outperformed DiffAb in terms of Rosetta Binding Energy and Evolutionary Plausibility. \\n\\nWe also presented more details in the updated figure, such as the CDR H3 sequences annotated with biophysical properties.\"}", "{\"comment\": \"Once again, we sincerely appreciate the reviewer's insightful and constructive comments, which have significantly enhanced the quality of our manuscript.\"}", "{\"comment\": \"Thank you for your detailed feedback. All points were well addressed, and I appreciate the clarity of your responses. Based on this, I am increasing my score accordingly.\"}", "{\"comment\": \"Thank you for your valuable suggestions, which have significantly improved the quality of our manuscript. We have provided point-by-point responses to your comments below. 
We hope that our revisions and additional experiments address your concerns satisfactorily.\\n\\n**Weakness 1: I would suggest introducing a background section, as there are many things in this manuscript that would benefit from a proper introduction.**\\n\\nWe added more background on both preference optimization and the diffusion-based generative model. For preference optimization, we introduce concepts at the beginning of our method (Line 159-163). We introduce more details on Equation 1 (Line 202-204), the definition of CTMC (Line 186), and $T^{(0:1)}$ (Line 191). Additionally, we explicitly describe that the time $t$ in diffusion processes is uniformly distributed on [0, 1] (Line 191-192).\\n\\nTo further enhance the readability of our manuscript, we have provided a detailed notation table for all symbols used throughout the paper, which can be found in Appendix A.1.\\n\\n**Weakness 2: The evaluation metrics remain unclear even after reading the appendix A.3. This holds especially for \\u201cEvolutionary Plausibility\\u201d, \\u201cStability\\u201d, \\u201cSelf-association\\u201d, \\u201cNon-specific Binding\\u201d.**\\n\\nWe have revised this part to make these terms clearer (Line 1427-1497) and show how the different metrics are calculated and what they represent. We also briefly explain these terms below. \\n\\n**Evolutionary Plausibility:** \\n\\n*Evolutionary Plausibility* measures how evolutionarily plausible a designed sequence is in nature, reflecting adherence to general evolutionary rules of natural proteins. Recent studies show that large-scale protein language models, trained on millions of natural protein sequences, effectively capture these evolutionary rules [1, 2]. Specifically, we calculate this metric as the log-likelihood of the designed sequence under a pre-trained protein language model (Line 1457). 
Importantly, this approach has proven useful for guiding antibody maturation in wet-lab experiments [1], and it is thus widely used as an evaluation metric in recent generative models for antibody design [3, 4, 5].\\n\\n**Stability:**\\n\\n*Stability* measures the stability of the conformation of the designed antibody in isolation, without the antigen structure involved. This metric differs from *Binding Energy*, which evaluates the interaction between the antibody and the antigen. Specifically, we calculate this metric using established protocols based on the Rosetta software, as employed in previous methods [6].\\n\\n**Self-association:**\\n\\n*Self-Association* refers to the tendency of antibody molecules to aggregate with each other. Self-association can negatively impact the efficacy of antibodies, so low self-association is desired in practical antibody development [7, 8]. Previous studies have shown that a larger negatively charged patch area in the CDRs corresponds to a higher risk of self-association [7]. Specifically, we calculate this metric using established protocols from previous methods [7].\\n\\n**Non-specific Binding:**\\n\\n*Non-specific Binding*, or *Binding Specificity*, refers to the undesirable interaction of antibodies with cellular proteins other than the intended target, particularly membrane proteins of the cell. Previous studies [7] have observed a strong correlation between non-specific binding and the hydrophobic patches in CDRs. Therefore, we use the hydrophobic patch area in CDRs as a proxy for evaluating the risk of non-specific binding, utilizing their established pipeline to calculate this metric [7].\\n\\n[1] Hie, et al. [Efficient evolution of human antibodies from general protein language models.](https://www.nature.com/articles/s41587-023-01763-2) Nature Biotechnology 2024. \\n\\n[2] Shuai, et al. 
[IgLM: Infilling language modeling for antibody sequence design.](https://www.cell.com/cell-systems/fulltext/S2405-4712(23)00271-5) Cell Systems 2023.\\n\\n[3] Zhu, et al. [Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary.](https://openreview.net/pdf?id=1YsQI04KaN) ICML 2024.\\n\\n[4] Kong, et al. [End-to-End Full-Atom Antibody Design](https://arxiv.org/abs/2302.00203). ICML 2023. \\n\\n[5] Zhou, et al. [Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization.](https://arxiv.org/pdf/2403.16576v1) NeurIPS 2024. \\n\\n[6] Li, et al. [Full-atom peptide design based on multi-modal flow matching.](https://arxiv.org/abs/2406.00735) ICML 2024. \\n\\n[7] Makowski, et al. [Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning.](https://www.nature.com/articles/s41551-023-01074-6) Nature Biomedical Engineering 2023.\\n\\n[8] Yadav, et al. [The influence of charge distribution on self-association and viscosity behavior of monoclonal antibody solutions.](https://pubs.acs.org/doi/abs/10.1021/mp200566k) Molecular Pharmaceutics 2012.\"}", "{\"summary\": \"This paper focuses on some important properties, such as non-antigen binding specificity and low self-association, and optimizes the model in a DPO-like manner. What distinguishes it from other DPO-based methods lies in two aspects: the optimization targets and the continuous rewards. With a two-stage training framework, the proposed AbNovo is capable of capturing generalized protein information and constraining the generated results with desired properties. Experiments also support its effectiveness, showing that the generated antibodies are well designed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Multiple objectives are considered to improve the quality of generated antibodies. Although not validated in the wet lab, these kinds of properties are essential.\\n2. 
This work does not simply integrate DPO to optimize only binding affinity, which broadens the horizons for similar works.\", \"weaknesses\": \"1. Rosetta energy is used as an alignment metric. It is well-known that forcefield energies have a weak correlation with measured binding affinity, typically around 0.3 [1,2]. This may lead in the wrong direction entirely.\\n2. Limited antibody optimization experiments, which should be a major highlight of antibody design. Some further experiments may alleviate this, like in [3,4].\\n\\n[1] Luo S, Su Y, Wu Z, et al. Rotamer density estimator is an unsupervised learner of the effect of mutations on protein-protein interaction[J]. bioRxiv, 2023: 2023.02.28.530137.\\n\\n[2] Ambrosetti, F., Piallini, G., & Zhou, C. Evaluating Forcefield Energies in Protein Binding Studies. National Center for Biotechnology Information, 2020.\\n\\n[3] Kong X, Huang W, Liu Y. End-to-end full-atom antibody design[J]. arXiv preprint arXiv:2302.00203, 2023.\\n\\n[4] Shitong Luo, Yufeng Su, Xingang Peng, Sheng Wang, Jian Peng, and Jianzhu Ma. Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures. Advances in Neural Information Processing Systems, 35:9754\\u20139767, 2022.\", \"questions\": \"1. In the visualization part, I don't see why results from dyMEAN and DiffAb do not satisfy constraints like Stability, Self-association. Can you explain this in detail?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this manuscript, the authors present AbNovo, a method combining constrained preference optimization with generative models for multi-objective antibody design. First, an antigen-conditioned generative model is trained to co-design antibody structure and sequence. 
Then this model is fine-tuned to maximize binding affinity to a target antigen while enforcing constraints on properties such as non-specific Binding, Self-association, and Stability. In their experiments, the authors compare their method to many recent works and show improved performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"\\u2022\\tThe authors provide many theoretical derivations and analyses.\\n \\n\\u2022\\tThe authors include many baselines in their experiments, which demonstrates the good performance of their proposed method.\", \"weaknesses\": \"\\u2022\\tI would suggest introducing a background section, as there are many things in this manuscript that would benefit from a proper introduction.\\n \\n \\t\\u2218\\tbase model, reference model, policy model could be introduced, e.g. with an intuition. These are introduced in Figure 1, but do not come with a description of how they are related. Only in Algorithm 1, the reader is shown that those are updated iterations of the very same model.\\n\\n \\t\\u2218\\tdelta and G in Equation 1 are never introduced, but instead taken from Campbell et al.\\n\\n \\t\\u2218\\tCTMC - Continuous Time Markov Chain is never defined.\\n\\n \\t\\u2218\\tThe notion of time t in diffusion processes used in the manuscript, t in U([0, 1]) is based on the CTMC definition by Campbell et al. but differs from that used in many other publications, e.g. [1] J. Ho, A. Jain, and P. Abbeel, \\u201cDenoising Diffusion Probabilistic Models\\u201d, [2] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, \\u201cDeep Unsupervised Learning using Nonequilibrium Thermodynamics,\\u201d. Thus, I would recommend introducing it e.g. as being in [0, 1] in line 185. 
Instead, this is first done in Equation 9.\\n\\n \\t\\u2218\\tT^(0:1) as a diffusion path is first defined in line 284 even though it is used many times before.\\n\\n\\n\\u2022\\tThe evaluation metrics remain unclear even after reading Appendix A.3. This holds especially for \\u201cEvolutionary Plausibility\\u201d, \\u201cStability\\u201d, \\u201cSelf-association\\u201d, \\u201cNon-specific Binding\\u201d.\\n \\n\\u2022\\tWhile the manuscript goes into great theoretical detail, intuition is often lacking. E.g. Equation 3 is introduced, but an intuition, \\u201cthe first term maximizes rewards, while the second term keeps the model close to the reference model.\\u201d, which could facilitate understanding for the reader, is missing. \\n \\n\\u2022\\tMany things necessary for fully understanding the paper are moved to the appendix, resulting in decreased readability. Further, this also applies to some of the most interesting results, e.g. Table 9 and especially Figure 4. \\n \\n\\u2022\\tSome tables are hard to read, as their captions and corresponding text do not exactly describe what is in the table. E.g. \\n \\n \\t\\u2218\\tIt is unclear what \\u201creference\\u201d in Table 1 describes.\\n \\t\\u2218\\tIn Table 3 the reader must guess that \\u201cESM-2 based\\u201d refers to \\u201cutilizing different language models\\u201d from the text and \\u201cMulti-objective\\u201d refers to \\u201cwe incorporated all constraints into the optimization objective\\u201d.\\n\\n\\u2022\\tIn the abstract and introduction, a focus is put on \\u201calleviate overfitting issues due to the scarcity of antibody-antigen training data\\u201d, but no analysis supporting such a claim is included. \\n \\n\\u2022\\tThe analysis of the \\u201cimpact of utilizing different language models in training the antibody design model\\u201d is very short and not well described. 
\\n \\n\\u2022\\tFigure 4 is a very interesting figure which summarizes the capabilities of DiffAb, AbX, and AbNovo very well and highlights that AbNovo \\u201cperforms best\\u201d. In there, we also observe that only a single antibody generated by DiffAb against 5NUZ does violate constraints. Therefore, it seems inadequate that the visualized antibody for DiffAb in Figure 2 is a sample which does not fulfill all constraints. Furthermore, the DiffAb sample with \\u201cRosetta binding energy: -2.12, Evolutionary Plausibility: 2.60\\u201d violating constraints cannot be found in Figure 4. \\n \\n\\u2022\\tSome claims appear exaggerated:\\n \\n \\t\\u2218\\t\\u201cthe first deep generative model for multi-objective antibody design, which explicitly optimizes multiple biophysical properties crucial for real-world antibody development.\\u201d There have been previous works which analyze the multi-objective setting for generating antibodies, e.g. \\u201cPareto Front Training For Multi-Objective Symbolic Optimization\\\" by Faris et al., which trains an algorithm to optimize a Pareto front of sequences regarding the objectives antibody binding quality, stability, and humanness. Perhaps the claim can be weakened or reformulated?\\n\\n \\t\\u2218\\tAbNovo is \\u201cbridging the gap between in silico design and practical application.\\u201d seems a bit too strong given that no practical application is contained.\\n\\n\\u2022\\tTypo \\u201cBolocks\\u201d in Figure 3 \\n \\nIn summary, I think this manuscript offers valuable new ideas but suffers from not being self-contained, sub-optimal readability, and limited depth of analysis. I hope these issues can be addressed in the rebuttal and would love to increase my score in response.\", \"questions\": \"\\u2022\\tin Section 4.2 you state that when \\u201cwe incorporated all constraints into the optimization objective by taking a weighted average\\u201d a \\u201cdrop in performance\\u201d is observable. 
However, the corresponding results show an improvement wrt. the \\u201cAll Constraints\\u201d metric. Could you elaborate on that?\\n \\n\\u2022\\tIn Table 2, we can observe that AbNovo (base) sometimes exhibits more favorable scores than AbNovo. Is there a tradeoff between fulfilling constraints and achieved AAR/RMSD? \\n \\n\\u2022\\tIs there a reason dyMEAN is not included in Figure 4 and AbX not in Figure 2, respectively?\\n\\n-------\", \"post_rebuttal\": \"All points were well addressed. Based on this, I am increasing my score accordingly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness 2: Limited antibody optimization experiments, which should be a major highlight of antibody design. Maybe some further experiment may alleviate this, like in [3,4].**\\n\\nFollowing your suggestion, we have added new experiments on antibody optimization. Specifically, we adopt the protocol and methods used in DiffAb for antibody optimization and show the performance of different methods under varying optimization steps. This process involves perturbing the CDR sequence and structure at time $t$ using forward diffusion, then denoising from time $t$ to time $0$ in reverse diffusion to generate 128 antibodies for each antigen.\\n\\nAs shown in the following tables, AbNovo has better performance in Rosetta Binding Energy, Evolutionary Plausibility, AAR, RMSD, and the proportion of constraint satisfaction.\\n\\nWe have added these new results to our manuscript (Appendix Table 6, Table 7, and Table 8) (from line 838 to line 873). 
\\n\\n| Optimization steps | DiffAb (Rosetta Binding Energy / Evolutionary Plausibility) | AbX (Rosetta Binding Energy / Evolutionary Plausibility) | AbNovo (Rosetta Binding Energy / Evolutionary Plausibility) |\\n| --- | --- | --- | --- |\\n| 4 | -10.45/2.39 | -8.80/2.40 | **-21.02/2.39** |\\n| 8 | -8.52/2.41 | -2.64/2.43 | **-19.77/2.37** |\\n| 16 | -7.18/2.42 | 2.07/2.42 | **-12.70/2.37** |\\n| 32 | -6.50/2.53 | -3.50/2.44 | **-15.35/2.36** |\\n| 64 | 0.23/2.57 | 3.98/2.44 | **-12.87/2.36** |\\n| 100 | -0.96/2.60 | 4.79/2.44 | **-12.05/2.36** |\\n\\n| Optimization steps | DiffAb (Constraints) | AbX (Constraints) | AbNovo (Constraints) |\\n| --- | --- | --- | --- |\\n| 4 | 13.2% | 14.0% | **12.8%** |\\n| 8 | 13.9% | 22.7% | **7.1%** |\\n| 16 | 13.6% | 22.5% | **6.5%** |\\n| 32 | 15.7% | 21.9% | **4.2%** |\\n| 64 | 21.5% | 23.0% | **3.6%** |\\n| 100 | 20.8% | 23.5% | **3.9%** |\\n\\n| Optimization steps | DiffAb (AAR / RMSD) | AbX (AAR / RMSD) | AbNovo (AAR / RMSD) |\\n| --- | --- | --- | --- |\\n| 4 | **0.88** / 1.09 | 0.80 / 0.97 | 0.85 / **0.80** |\\n| 8 | **0.76** / 1.59 | 0.59 / 1.51 | 0.69 / **1.34** |\\n| 16 | 0.48 / 1.78 | 0.49 / 1.54 | **0.51 / 1.46** |\\n| 32 | 0.39 / 2.05 | 0.45 / 1.88 | **0.50 / 1.66** |\\n| 64 | 0.30 / 2.69 | 0.45 / 2.33 | **0.48 / 2.03** |\\n| 100 | 0.28 / 2.86 | 0.44 / 2.50 | **0.49 / 2.38** |\\n\\n\\n**Question 1: In the visualization part, I don't see why results come from dyMEAN and DiffAb do not satisfy constraints like Stability, Self-association. Can you explain this in detail?**\\n\\nFollowing your suggestion, we have revised Figure 3 to demonstrate more details. Specifically, we presented the sequence of the CDR H3 region annotated with biochemical properties. We explain the details for this case regarding Stability and Self-Association as follows:\\n\\n**Stability:** \\n\\n*Stability* measures the stability of the conformation of the designed antibody in isolation, without the antigen structure involved. 
Following the protocol in the previous method, we compute the metric of Stability using the Rosetta software. \\nWe observed that the structures generated by dyMEAN exhibit numerous steric clashes (indicated by dashed lines in the structure), which will break down the van der Waals energy term in the Rosetta energy. \\n\\n**Self-association**\\n\\n*Self-Association* refers to the tendency of antibody molecules to aggregate with each other. \\nPrevious studies have shown that a larger area of negatively charged patches in the CDRs corresponds to a higher risk of self-association in wet-lab experiments [1]. We see that dyMEAN produces a large number of charged amino acids, which can lead to potential risks of self-association.\\n\\nPlease note that the case of DiffAb used in this version of the manuscript differs from the previous version, following the suggestion of reviewer fXjU. In this updated evaluation, the antibodies designed by DiffAb satisfy the Stability and Self-association constraints. However, they exhibit poor performance in other critical metrics, including Rosetta binding energy, Evolutionary Plausibility, AAR, and RMSD.\\n\\n[1] Makowski, et al. [Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning](https://www.nature.com/articles/s41551-023-01074-6). Nature Biomedical Engineering 2023. \\n\\n\\nThank you for your constructive feedback on our paper. If you have any further concerns, please feel free to discuss them with us. We look forward to your response.\"}", "{\"comment\": \"I\\u2019m satisfied with the response and have increased my score to 6. Well done\"}", "{\"summary\": \"This paper presents an antibody design method, AbNovo, which achieves antibody design through multi-objective optimization. 
By introducing a structure-aware protein language model and employing constrained preference optimization with continuous rewards, AbNovo surpasses previous methods in both reference-based metrics and reference-free metrics (i.e., biophysical properties).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Achieved performance on physical metrics that significantly surpasses other methods.\\n\\nIntroduced a structure-aware protein language model and demonstrated its usefulness for antibody design.\\n\\nProvided rigorous theoretical derivation\", \"weaknesses\": \"Seems to be an updated version of AbDPO, somewhat heavier but showing better performance.\\n\\nThe task setting is overly simplistic. Although the structure of the antibody's FR region is relatively conserved and can be considered known, the binding pose between the antibody and antigen is typically unknown. However, given that the main goal of this work is to propose a new method for antibody optimization, this limitation is understandable.\", \"questions\": \"1. The announcement of \\\"The first deep generative model for multi-objective antibody design\\\" in summarized contributions, AbDPO also supports multi-objective optimization.\\n\\n2. In energy evaluation, if you want to assess the energy performance of the designed backbone, energy minimization is necessary for the side chains while keeping the backbone structure unchanged, and then calculate the energy. If you wish to evaluate the antibody's performance in real experiments (which implies the CDR region's structure might not maintain the designed configuration), you can use multi-chain supporting folding models like AlphaFold3 to predict the binding structure. When calculating energy, does the relaxation you used optimize only the side chain conformations, or does it also alter the main chain structure? 
If it's the latter, are these experiments intended to demonstrate that AbNovo can generate a better initial structure for Rosetta relaxation?\\n\\n3. Does the optimization of these physical properties contribute to some chemical validity? For example, does the peptide bond length get closer to the actual length?\\n\\n4. The standard deviation of the physical energy needs to be presented.\\n\\n5. The AAR performance is excessively high, and it's necessary to check whether the training data of the protein language model contains samples similar to the test set.\\n\\n6. I am curious about how many amino acids have mutated in those designed antibodies that outperform natural ones (at least in binding energy).\\n\\n7. The task setting of dyMEAN is different from others, including AbNovo. dyMEAN does not provide the real FR structure, making direct comparison somewhat unfair. Additionally, how is it achieved to use dyMEAN to generate 128 antibodies for an antigen?\\n\\n8. Calculating RMSD on the aligned structures seems somewhat unreasonable. Typically, for two rigid bodies that can freely undergo SE(3) transformations, alignment is performed first, followed by RMSD calculation. However, in the setting of this paper, the FR region is given, meaning the CDR region cannot undergo SE(3) transformations independently, thus requiring a direct RMSD calculation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness 8: Some claims appear exaggerated**\\n\\nWe have addressed these concerns in the revised manuscript. Specifically:\\n\\n1. We have removed the word \\u201cfirst\\u201d from the sentence (Line 71-73).\\n2. We have removed the claim that AbNovo is \\u201cbridging the gap between in silico design and practical application\\u201d from the paper (Line 81-82).\\n\\n**Weakness9: Typo \\u201cBolocks\\u201d in Figure 3**\\n\\nThanks for pointing out this typo. 
We have fixed it. \\n\\n**Question 1: In Section 4.2 you state that when \\u201cwe incorporated all constraints into the optimization objective by taking a weighted average\\u201d a \\u201cdrop in performance\\u201d is observable. However, the corresponding results show an improvement wrt. the \\u201cAll Constraints\\u201d metric. Could you elaborate on that?**\\n\\nFor a more accurate description, we have updated the statement in the manuscript to: \\u201cWe observed a slight increase in fulfilling all constraints but a significant drop in performance in Binding Energy and Evolutionary Plausibility.\\u201d (Line 442-444).\\n\\nWhen we incorporate all constraints into the optimization objective, the model struggles to balance these objectives effectively, often allowing one objective to dominate at the expense of others. While this ablation model achieves slightly better performance in fulfilling all constraints, it performs significantly worse in metrics such as Rosetta Binding Energy and Evolutionary Plausibility.\\n\\nA similar observation has been made in language model preference optimization, where a constrained preference optimization framework is shown to be more effective than a pure preference optimization framework in balancing user-helpful responses against safety concerns (e.g., avoiding offensive content) [1, 2].\\n\\n[1] Bianchi,et al. [Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions](https://arxiv.org/abs/2309.07875). ICLR 2023. \\n\\n[2] Liu, et al. [Enhancing LLM Safety via Constrained Direct Preference Optimization.](https://arxiv.org/abs/2403.02475) ICLR WorkShop 2024.\\n\\n**Question 2: In Table 2, we can observe that AbNovo (base) sometimes exhibits more favorable scores than AbNovo. 
Is there a tradeoff between fulfilling constraints and achieved AAR/RMSD?**\\n\\nYes, as you observed, there is a slight trade-off between fulfilling constraints and achieving AAR/RMSD, which is consistent with findings in previous work [1]. In the base model, the training objectives focus on generating antibodies that closely align with native antibodies in terms of sequence and structure. After applying preference optimization, we found that a slight sacrifice in AAR and RMSD can lead to significant improvements in other metrics, such as binding affinity, evolutionary plausibility, and other important biochemical properties.. \\n\\n[1] Zhou, et al. [Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization.](https://arxiv.org/pdf/2403.16576v1) NeurIPS 2024. \\n\\n**Question 3: Is there a reason dyMEAN is not included in Figure 4 and AbX not in Figure 2 respectively?**\\n\\nFigure 4 illustrates the distribution of designed antibodies for one specific antigen. Since dyMEAN cannot generate diverse antibodies for a given antigen, it is excluded from Figure 4.\\n\\nWe have now included AbX in Figure 2. The main reason we omitted AbX in the previous version was that including it would have made the figure look too busy.\\n\\nThank you for your constructive feedback on our paper. If you have any further concerns, please feel free to discuss them with us. We look forward to your response.\"}", "{\"comment\": \"**Question 7: The task setting of dyMEAN is different from others, including AbNovo. dyMEAN does not provide the real FR structure, making direct comparison somewhat unfair. Additionally, how is it achieved to use dyMEAN to generate 128 antibodies for an antigen?**\\n\\nWe agree that differences in task settings for dyMean can lead to unfair evaluations. We include dyMEAN in our comparison because it is a widely used method and a strong baseline in the literature. 
We have now explicitly described dyMEAN's experimental setup and the potential biases it may introduce in the main text (Line 391-394). \\n\\nAdditionally, we have clarified that dyMEAN does not generate diverse samples, and we only sample 128 antibodies for all generative models. In our previous submission, we did not show the distribution of designed antibodies for a specific antigen for dyMEAN due to the same reason (Figure 3, Line 920). \\n\\n**Question 8: Question about structure alignment.**\\n\\nIn our initial submission, we applied structural alignment for all methods when calculating the RMSD metric.\\n\\nIn response to your concern, we have recomputed the RMSD metric without performing a structural alignment. As shown in the table below, the RMSD values with and without structural alignment exhibited only a very slight deviation (less than 0.005 \\u00c5). This small difference arises because the framework region dominates the majority of the antibody Fv region. AbNovo still outperforms all baseline methods.\\n\\n| | DiffAb | dyMEAN | AbX | GeoAb | AbNovo |\\n| --- | --- | --- | --- | --- | --- |\\n| RMSD with structural alignment | 2.86 | 3.88 | 2.49 | 2.57 | 2.37 |\\n| RMSD without structural alignment | 2.86 | 3.88 | 2.50 | 2.57 | 2.38 |\\n\\nWe have now updated the result tables (Table 5, Line 810).\\n\\nThank you for your constructive feedback on our paper. If you have any further concerns, please feel free to discuss them with us. We look forward to your response.\"}
This may lead to the totally wrong direction.**\\n\\nThank you for raising this important concern. We agree that forcefield energies are limited in measuring binding affinities. We would like to clarify our methods and the rationale for using forcefield energies as follows. \\n\\nFirst, our method does not rely solely on forcefield energies for optimization. Instead, we employ a unified framework of constrained optimization that integrates multiple objectives and constraints to guide the optimization process and prevent it from diverging in an incorrect direction. Specifically, \\n\\n1. We include the likelihood under a large-scale protein language model as an optimization objective, which has proven effective in improving antibody screening success rates in wet-lab experiments [1, 2]. \\n2. We incorporate constraints for other biophysical properties such as specificity and low self-association, which have also proven useful in guiding antibody design in experimental settings [3, 4].\\n3. During preference optimization, we introduce a regularization term in the training loss function (Equation 3, Line 236, Page 5) to ensure the fine-tuned model remains close to the base model, preventing arbitrary divergence.\\n\\nSecond, despite its limitations, Rosetta binding energy has been utilized as a screening metric, and several proteins designed using it have been experimentally validated in wet-lab experiments [5, 6]. Thus, binding energy is widely used as a metric for benchmarking recent generative models for antibody design, such as DiffAb [7], dyMEAN [8], AbX [9], and AbDPO [10].\\n\\nThird, the AbNovo framework is flexible and allows the inclusion of other constraints or optimization objectives important for antibody design. One of our key methodological contributions lies in building the framework of constrained preference optimization for antibody design, providing rigorous theoretical derivations and proofs. 
\\n\\nFinally, we acknowledge that, although forcefield energy is widely used as a metric in recent work, there remains a gap in its correlation with experimental results. We have included this point in the Discussion section (Line 533) to raise awareness in the community and inspire the development of improved methods.\\n\\n[1] Hie, et al. [Efficient evolution of human antibodies from general protein language models.](https://www.nature.com/articles/s41587-023-01763-2) Nature Biotechnology 2024. \\n\\n[2] Shuai, et al. [IgLM: Infilling language modeling for antibody sequence design](https://www.cell.com/cell-systems/fulltext/S2405-4712(23)00271-5). Cell System 2023.\\n\\n[3] Makowski, et al. [Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning.](https://www.nature.com/articles/s41551-023-01074-6) Nature Biomedical Engineering 2023. \\n\\n[4] Makowski, et al. [Co-optimization of therapeutic antibody affinity and specificity using machine learning models that generalize to novel mutational space.](https://www.nature.com/articles/s41467-022-31457-3) Nature Communications, 2023. \\n\\n[5] Cao, et al. [Design of protein-binding proteins from the target structure alone.](https://www.nature.com/articles/s41586-022-04654-9) Nature 2022. \\n\\n[6] Sun, et al. [Accurate de novo design of heterochiral protein\\u2013protein interactions.](https://www.nature.com/articles/s41422-024-01014-2#Sec9) Cell Research 2024. \\n\\n[7] Luo, et al. [Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures.](https://openreview.net/forum?id=jSorGn2Tjg) NeurIPS 2022. \\n\\n[8] Kong, et al. [End-to-End Full-Atom Antibody Design](https://arxiv.org/abs/2302.00203). ICML 2023. \\n\\n[9] Zhu, et al. [Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary.](https://openreview.net/pdf?id=1YsQI04KaN) ICML 2024. \\n\\n[10] Zhou, et al. 
[Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization.](https://arxiv.org/pdf/2403.16576v1) NeurIPS 2024.\"}", "{\"comment\": \"We sincerely appreciate your valuable suggestions, which have helped us further improve the quality of our article. We have provided point-to-point responses to your comments as follows.\\n\\n**Weakness 1: Seems to be an updated version of AbDPO, somewhat heavier but showing better performance.**\\n\\nOur method differs from AbDPO in both the optimization framework and the underlying motivation.\\n\\n**AbDPO** employs direct preference optimization:\\n\\n$\\max \\ \\sum_{i} R_i-\\beta \\mathrm{KL}(p_{\\theta} \\| p_{\\rm ref})$ (Equation 1)\\n\\nIn contrast, **AbNovo** utilizes constrained preference optimization:\\n\\n$\\max \\ \\sum_{i} R_i-\\beta \\mathrm{KL}(p_{\\theta} \\| p_{\\rm ref})$ (Equation 2)\\n\\n${\\rm s.t.} \\ \\ C_j<C_{{\\rm limit},j}$ for $j$ in all constraint sets\\n\\nOur framework is novel in diffusion-based generative models and is supported by rigorous theoretical derivations and proofs.\\n\\nThe key motivation is that crucial biochemical properties in practical antibody development often present inherent trade-offs\\u2014for example, improving binding affinity may increase the risk of non-specific binding to non-target proteins [1, 2].\\nSimply combining multiple objectives, as in Equation 1, struggles to balance these trade-offs effectively, often allowing one objective to dominate at the expense of others. In contrast, the constrained preference optimization framework allows us to set thresholds for certain properties (e.g., specificity) while optimizing others (e.g., affinity). It enables dynamic adjustment of the relationship between 'objectives' and 'constraints' through the Lagrangian method, thereby mitigating trade-offs to some extent. 
\\n\\nIn our initial submission, we included an ablation study comparing the two optimization frameworks (see the \\\"Multi-objective\\\" row in Table 2, Line 474). In this study, we incorporated all constraints into the objective function as shown in Equation 1. The results demonstrate that AbNovo achieves better performance in Rosetta Binding Energy, Evolutionary Plausibility, RMSD, and AAR. In contrast, the ablation model tends to focus solely on satisfying the constraints, resulting in slight benefits in the metric of fulfilling all constraints but substantially sacrificing performance in other metrics.\\n\\nSimilar trade-offs are observed in language model preference optimization, balancing user-helpful responses against safety concerns (e.g., avoiding offensive content) [3, 4]. One of our contributions is extending the constrained optimization framework for language models to multimodal diffusion models, providing theoretical support for this extension.\\n\\n[1] Makowski, et al. [Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning.](https://www.nature.com/articles/s41551-023-01074-6) Nature Biomedical Engineering 2023. \\n\\n[2] Makowski, et al. [Co-optimization of therapeutic antibody affinity and specificity using machine learning models that generalize to novel mutational space.](https://www.nature.com/articles/s41467-022-31457-3) Nature Communications, 2023.\\n\\n[3] Bianchi,et al. [Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions](https://arxiv.org/abs/2309.07875). ICLR 2023.\\n\\n[4] Liu, et al. [Enhancing LLM Safety via Constrained Direct Preference Optimization.](https://arxiv.org/abs/2403.02475) ICLR WorkShop 2024.\\n\\n**Weakness 2: The task setting is overly simplistic. 
Although the structure of the antibody's FR region is relatively conserved and can be considered known, the binding pose between the antibody and antigen is typically unknown. However, given that the main goal of this work is to propose a new method for antibody optimization, this limitation is understandable.**\\n\\nThank you for your insightful comment regarding the limitation of our method. When the binding pose between the antibody and antigen is unknown, recent approaches in antibody design [1] utilize protein structure prediction and docking software to determine the pose before designing antibodies. In response to your concern, we conducted a new experiment where we first establish the binding pose following the strategy used in these methods, and then design the CDR regions with AbNovo.\\n\\nAs demonstrated in the table below, AbNovo continues to outperform other methods in this application scenario. We have included this table in Appendix (Table 9, Line 886).\\n\\n| | AAR H3 | RMSD H3 | Rosetta Binding Energy | Evolutionary Plausibility | Constraints |\\n| --- | --- | --- | --- | --- | --- |\\n| dyMEAN | 0.37 | 3.88 | -1.75 | 2.82 | 94.5% |\\n| AbX | 0.40 | 2.83 | 11.22 | **2.54** | 39.9% |\\n| AbNovo | **0.44** | **2.59** | **-5.81** | **2.54** | **25.5%** |\\n\\n[1] Luo, et al. [Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures.](https://openreview.net/pdf?id=jSorGn2Tjg) NeurIPS 2022.\"}", "{\"comment\": \"Thanks for the response, and I have raised my score.\"}" ] }
4jzjexvjI7
Regret measure in continuous time limit for a stochastic Multi-armed bandit problem
[ "Sabrine Chebbi", "Sofien Dhouib", "Setareh Maghsudi" ]
We study a class of stochastic multi-armed bandit problems with a risk-sensitive regret measure within a continuous limit setting. This problem is interesting when optimizing the expected reward is not the foremost objective and the problem horizon is long. By scaling the state parameters, including the number of pulls and the cumulative reward for each arm, we study the bandit problem with an infinite horizon and delineate such risk using a Hamilton-Jacobi-Bellman equation with quadratic growth. Using this approach, we establish an explicit form of the optimal policy associated with the considered risk. As an application, we present examples where the results obtained in continuous time offer insights into the optimal policy for each case. Finally, numerical experiments confirming the theoretical results are presented.
[ "Stochastic multi-armed bandit", "Risk-sensitive regret", "Hamilton-Jacobi-Bellman equation", "Continuous time-limit" ]
Reject
https://openreview.net/pdf?id=4jzjexvjI7
https://openreview.net/forum?id=4jzjexvjI7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x0jCvLhlv7", "f36yPoEwpZ", "YyNeZt1T5r", "6akks5jzIj", "4ZcgAcbFgs" ], "note_type": [ "official_review", "decision", "official_review", "meta_review", "official_review" ], "note_created": [ 1729938471267, 1737523647836, 1730209256589, 1734369935045, 1729892777801 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4560/Reviewer_Z8Gu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4560/Reviewer_TCho" ], [ "ICLR.cc/2025/Conference/Submission4560/Area_Chair_DrG3" ], [ "ICLR.cc/2025/Conference/Submission4560/Reviewer_gWB4" ] ], "structured_content_str": [ "{\"summary\": \"This paper studies a class of stochastic multi-armed bandit problems with a risk-sensitive regret measure within a continuous limit setting\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Considering continuous-time limit of regret measures in continuous time.\", \"weaknesses\": \"The presentation is not clear.\\n\\nThe paper's contribution and the significance of the problem are not clearly articulated in the Introduction and the main text. \\n\\nThe English in the paper could benefit from some further refinement or editing to enhance clarity and coherence.\", \"questions\": \"1. what is the main contribution of the paper?\\n\\n2. what is exactly the problem studied?\\n\\n3. why studying the continuous-time limit is relevant for bandit problems?\\n\\n4. 
How should we interpret the main result Theorem 1 and understand its practical relevance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper aims to analyze multi-armed bandit problems using differential equations and introduces a new risk measure for the analysis.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"I am unable to provide a comprehensive scientific review of the paper, and thus I cannot identify specific strengths. Please refer to the weaknesses below.\", \"weaknesses\": \"The paper has significant issues with presentation. Not only are there numerous grammatical errors, typos, and punctuation mistakes, but many sentences are incomplete and seem disconnected from the surrounding context. Additionally, the writing lacks a clear logical flow, making it difficult to follow the argument.\\n\\nFurthermore, it appears that the authors have not adhered to the official ICLR style guidelines.\\n\\nDue to these issues, I am unable to provide a more detailed review.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers struggled with identifying the high-level approach of this paper. The ratings are overly harsh in my opinion but the paper definitely needs a thorough revision to get it into a publishable state. That includes simple things like polishing the text, but it also would help to motivate and highlight the contributions of the paper early on.\", \"additional_comments_on_reviewer_discussion\": \"Unfortunately, no rebuttal was submitted by the authors.\"}", "{\"summary\": \"This paper considers the traditional multi-armed bandit problem with a new risk measure. 
The authors make time continuous through rescaling and use PDEs to find the optimal policy. In the meantime, the authors use some simulations to verify their results.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The way to convert the MAB problem to a PDE problem is interesting and meaningful. The work compares different concepts, like frequentist and Bayesian settings, making it easy to understand the applicability of the method.\", \"weaknesses\": \"1. The writing needs to be improved. There are a lot of typos which make it hard to understand the paper.\\n\\n2. There are no real-world applications provided by the author regarding why this new risk measure is important, reducing the credibility and impact of the paper.\\n\\n3. The usage of MDP seems improper. In your setting, $\\\\nu$ seems to be fixed and only $s$ and $q$ are changing. However, there is no need to learn the transition kernel as if you choose an action $a$, the corresponding $q$ will be increased by 1. Then, it reduces to learning the reward function which is the same as in traditional MAB literature and so people usually don't call it an MDP. It's more reasonable to use your framework to consider the case that $\\\\nu$ is varying and say it's an MDP.\\n\\n4. The notations are messy. For example, why does $V_{i+1}$ only rely on $R_i$? And you use a very strong assumption but only hide it in Lemma 1.\\n\\n5. Theorem 1 is unclear. What is zero? Why do you use a bracket but link it to nothing?\\n\\n6. In your numerical study, how do you implement UCB and TS? Do you adjust their definitions of regrets to your new risk measure? If not, they are not comparable. Otherwise, it's better to mention how you set the baseline in detail.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4jBJ6JphYM
Procedural Fairness Through Addressing Social Determinants of Opportunity
[ "Zeyu Tang", "Alex John London", "Peter Spirtes", "Kun Zhang" ]
_Social determinants of opportunity_ are variables that, while not directly pertaining to any specific individual, capture key aspects of contexts and environments that have direct causal influences on certain attributes of an individual, e.g., environmental pollution in an area affects an individual's health condition, and educational resources in a neighborhood influence an individual's academic preparedness. Previous algorithmic fairness literature often overlooks _social determinants of opportunity_, leading to implications for procedural fairness and structural justice that are incomplete and potentially even inaccurate. We propose a modeling framework that explicitly incorporates _social determinants of opportunity_ and their causal influences on individual-level attributes of interest. To demonstrate theoretical perspectives and practical applicability of our framework, we consider college admissions as a running example. Specifically, for three mainstream admission procedures that have historically been implemented or are still in use today, we distinguish and draw connections between the outcome of admission decision-making and the underlying distribution of academic preparedness in the applicant population. Our findings suggest that mitigation strategies centering solely around protected features may introduce new procedural unfairness when addressing existing discrimination. Considering both individual-level attributes and _social determinants of opportunity_ facilitates a more comprehensive explication of benefits and burdens experienced by individuals from diverse demographic backgrounds as well as contextual environments, which is essential for understanding and achieving procedural fairness effectively and transparently.
[ "Procedural Fairness", "Social Determinants of Opportunity", "Causal Fairness", "Structural Justice" ]
Reject
https://openreview.net/pdf?id=4jBJ6JphYM
https://openreview.net/forum?id=4jBJ6JphYM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "r16vQfFAVQ", "q6CnADPZQp", "mEknoMPopj", "iolGtFn74e", "gWCuzfHSUb", "aD6F01T9ex", "Z7NQrFXR1B", "YUlkvdSFyH", "QUo5TEyBCA", "Iy1aqmorNq", "HoHC2ZoD14", "D0gOlXABd0", "CRlP86KZe5", "9sXhKHy6Vz", "7QajXFV5rY", "67LE78p0f4", "5VjCDht4PQ", "19Ul0jG1hX" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1730589153619, 1733783314563, 1732497788094, 1732560080640, 1732634200441, 1732556870776, 1732573850537, 1730721462746, 1732497454993, 1732497539844, 1732707175971, 1732916509152, 1733100182697, 1732497151627, 1732497204267, 1737523643750, 1730083313747, 1732583295412 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_8F2j" ], [ "ICLR.cc/2025/Conference/Submission4488/Area_Chair_Lwxg" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_h9Xa" ], [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_8F2j" ], [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_8F2j" ], [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_g5W4" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4488/Reviewer_h9Xa" ], [ "ICLR.cc/2025/Conference/Submission4488/Authors" ] ], "structured_content_str": [ "{\"summary\": 
\"This paper develops a model of the interactions between ethnicity, academic preparedness, and \\\"social determinants of opportunity\\\", which capture socio-geographic influences on academic preparedness. The model is used to study different college admissions policies both in theory, and applied to a dataset of UC Berkeley admissions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is clearly arranged and easy to follow. This paper is also laudable for trying to raise the salience of geographic and community influences on opportunity over a reductive focus on ethnicity.\", \"weaknesses\": \"UPDATE: After the discussion period I stand by these weakness of the paper.\\n\\nUltimately, the theoretical results in this paper do not provide novel insight, and the empirical results aren't very interesting or plausible. The model is used to show that:\\n\\n1. Quota-based affirmative action harms disadvantaged members of majority groups.\\n2. \\\"Plus factors\\\" for being from an underrepresented group benefit advantaged members of that group more than disadvantaged members.\\n3. \\\"Top-percent\\\" policies that are blind to ethnicity reallocate opportunity to regions with less of it.\\n\\nAll three of these findings are well-known, and have been part of the debate around these policies for decades. The model doesn't provide extra insights into these policies. \\n\\nWhen the model is deployed on real data, it also doesn't provide insights. It seems as though the admissions data from Berkeley is too censored to study the impacts of social determinants of opportunity, since it doesn't include anything about an applicant's geography beyond whether they are in-state. 
The regions inferred by the model don\\u2019t make a lot of sense given what we know about California\\u2019s ethnic geography (eg they don't show any signs of the racial segregation induced by California's restrictive housing policies).\", \"questions\": \"Are the regions in the experiment latent? Region 1 and 3 are almost identical, calling into question the identifiability of the model. Also, if regions don\\u2019t correspond to geographies or social networks then how can they capture social determinants of opportunity?\", \"notes\": \"> \\u201cSpecifically, by definition of causality, this edge asserts that there is a difference in the distribution of education status, when we \\u201cintervene\\u201d on individual\\u2019s race while keeping all other things unchanged\\u201d\\n\\n\\u201call other things\\u201d meaning every other variable in their causal graph, not literally every other possible thing. Since these graphs typically only use a handful of features, I don\\u2019t think this edge is an endorsement of racial essentialism - it just summarizes dozens of effects that the model is too coarse to model explicitly.\\n\\n>\\u201cIf a certain edge or path in the causal model does not reflect an actual real-world causal process, subsequent causal fairness analyses based on causal effects may not provide informative conclusions.\\u201d\\n\\nThis is certainly true, but to my knowledge not a single causal graph in history has ever actually described a real-world causal process where humans were involved. It\\u2019s extremely difficult to establish single treatment effects, let alone a network of them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper addresses procedural fairness by modeling social determinants of opportunity and their causal influence on individual attributes, such as academic preparedness. 
Using college admissions as a central example, the authors explore various admissions policies and their potential impacts on fairness. They argue that accounting for social determinants of opportunity can mitigate procedural unfairness, which is often overlooked by traditional fairness approaches that focus solely on protected attributes like race or gender.\\n\\nOverall, the reviewers agree that the research question is important and acknowledge the authors' efforts in addressing it. However, concerns were raised about the (potentially) limited empirical contributions, reliance on a US-centric legal framework, and the lack of generalizable findings beyond college admissions. While the authors provided responses to the reviewers' concerns, including discussions of the modeling contribution, insights from the modeling framework, and the generalizability of the findings, the reviewers\\u2019 opinions remain largely unchanged. Therefore, we recommend rejecting the paper in its current form but hope the authors find the reviewer comments helpful for future revisions.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided responses to the reviewers' concerns, including discussions of the modeling contribution, insights from the modeling framework, and the generalizability of the findings, but the reviewers were not entirely convinced and their opinions remain largely unchanged.\"}", "{\"title\": \"Response to Reviewer h9Xa\", \"comment\": \"We are very grateful for your constructive and insightful comments, and for the time and effort devoted! We have provided a revised manuscript, where we use blue font to indicate added/revised material. Below please see our responses to specific points in the review comment:\\n\\n### **C1:** \\\"While there are limitations in studying observational data, this paper could have benefitted from a further analysis of the University of California dataset\\\"\\n\\n**A1:** Thanks for the insightful comment.
Following your suggestion, we have included an additional section Appendix C.1 to present a description of the UC data and further analysis on the results, together with a side note `Re: C1 by Reviewer h9Xa` on page 24 to help locate the material.\\n\\n---\\n\\n### **C2:** \\\"Analysis on other datasets could have provided more support to the experimental section of this paper\\\"\\n\\n**A2:** Thanks for the constructive suggestion. We have included additional data analyses on the US Census data in Appendix C.2, along with a side note `Re: C2 by Reviewer h9Xa` on page 26.\\n\\n---\\n\\n### **C3:** \\\"Figure 3 could have benefitted from further discussion, particularly in relation to each other and how the correlation between race and social [determinants] of opportunity correlates with understanding academic preparedness in a region\\\"\\n\\n**A3:** Thanks for the constructive comment. We follow the suggestion and have added further discussions in Appendix C.1.3 (with a side note `Re: C3 by Reviewer h9Xa` on page 25) and also provided the pointer in the main text (footnote 6).\\n\\n---\\n\\n### **C4:** \\\"Potentially interleaving the methods and providing experimental results for the modeling of past admissions systems could have provided more tractable examples of how this framework compares to prior work.\\\"\\n\\n**A4:** Thank you for the thoughtful comment. While we present theoretical analyses followed by experimental results (instead of interchangeably), we provide accompanying illustrative figures (Figure 2) for the theoretical analysis of each kind of policy (Theorems 4.5 -- 4.7), to explicate the implications of different strategies.\\n\\nPrior works in algorithmic fairness typically drop the information that is relevant to social determinants of opportunity (among other potential issues, as presented in Section 3.1).
When such information is dropped, it is not straightforward (if possible at all) for previous works to provide similarly nuanced analyses.\\n\\n---\\n\\n### **Q5:** \\\"How does modeling University of California admissions data via the presented framework differ experimentally from past methods?\\\"\\n\\n**A5:** Thanks for the question. Our framework differs experimentally from previous methods in both the handling of data and the questions it intends to answer.\\n\\n- Since social determinants of opportunity correlate with protected features, we do not drop variables that seem to be unrelated to the decision-making, but in fact capture influence from the contextual environment to the individual (e.g., the address). We discuss this point in Section 3.1 (side note `Re: Q5.1 by Reviewer h9Xa` on page 4).\\n\\n- Apart from data handling, our framework also differs from previous approaches in the questions we intend to address. Specifically, we aim to characterize and address social determinants of opportunity when achieving procedural fairness, which cannot be simplified into merely considering the protected features at the individual level. In light of your question, we have included the above discussion in more detail in Appendix A.2, together with a side note `Re: Q5.2 by Reviewer h9Xa` on page 19.\\n\\n---\\n\\n### **Q6:** \\\"What are possible extensions of this framework in developing more holistic admissions systems?\\\"\\n\\n**A6:** Thanks for the question about the next steps and how to go further. We use college admission as a concrete example, but our advocacy for addressing _social determinants of opportunity_ to achieve procedural fairness has various implications for a broader scope (which goes beyond admission decision-making).\\n\\nFor example, from the college perspective, outreach programs in the community can be helpful, especially when educational resources are scarce in the area.
From the social policy perspective, investments to improve living and environmental conditions can positively affect people's overall health and better enable them to pursue diverse life plans.\"}", "{\"title\": \"Follow-up to Reviewer 8F2j\", \"comment\": \"Thanks for clarifying Q3. In the additional experimental analyses on the US census data, the PUMAs are regions defined by known geographies or social networks. The material can be found in Appendix C.2 of **our revised manuscript**.\\n\\n---\\n\\nWe believe there are still some misunderstandings. Please kindly allow us to clarify:\\n\\n### **a)** \\\"the findings of the model being well-known\\\"\\n\\n**Re: a)** We understand that similar arguments exist in debates, e.g., in legal cases. However, we would like to note that\\n\\n1. Our work of precisely capturing contextual influences via causal modeling is novel: **there is no overlap with previous algorithmic fairness literature**\\n\\n1. Due to the issues of previous modeling (Section 3.1), **previous causal fairness approaches do not produce the nuanced analyses facilitated by our framework**\\n\\n1. The algorithmic fairness literature is naturally multi-disciplinary; many of the ideas often echo wisdom from other disciplines (philosophy, legislation, sociology, etc.). The fact that our model provides quantitative and transparent findings that are aligned with arguments in other disciplines **demonstrates the value of our approach, especially since these findings are not produced by previous causal fairness approaches**.\\n\\n---\\n\\n### **b)** \\\"the application of the model to real data being uncompelling (due to the lack of granular data)\\\"\\n\\n**Re: b)** In our revised manuscript, we have **followed your original suggestion and provided extensive analyses of the US Census data, where granular data is available**.
We have added the material in Appendix C.2, along with a side note on page 26 to help locate the content.\\n\\n---\\n\\nPlease kindly let us know if **our revised manuscript** and the clarification help address your concern. Thanks again for your time and careful review.\"}", "{\"comment\": \"Thank you for providing additional details. The discussion about Figures 3/4 in the appendix is clarifying and I would encourage parts of the explanation to be moved to the main text.\\n\\nAs another reviewer mentioned, Appendix C2 seems to be a separate analysis of census data that doesn't relate to the model presented in the paper. I was more so curious how well the presented model extrapolates to other datasets, though I understand how the analysis seeks to frame using social determinants of opportunity more broadly.\"}
Conceptually it's nice to consider sociogeographic factors in this sort of analysis but in practice this is a simple extension of previous work.\"}", "{\"summary\": \"This paper discusses the consideration of \\\"social determinants of opportunity\\\" such as geographical locations for algorithmic fairness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The actual effects of different approaches to achieve \\\"fairness\\\" are discussed, which is often not considered enough in our field.\", \"weaknesses\": \"First of all, I am not sure if this conference is a good fit for this paper/topic since it is from my perspective hardly at all concerned with \\\"learning representation\\\". It is more a general societal consideration about how fairness could be achieved.\\n\\nWhile the claim of the paper is to discuss the \\\"social determinants of opportunity\\\" in general, the discussion focusses very much on a single use case, i.e., university admissions.\\n\\nThe paper is written in a very US-centric way, specifically considering the legal situation.\\n\\nThe case considerations in Section 4 often (e.g., Section 4.3.) come to conclusions that are quite trivial. E.g., that taking the top-x % per region increases the share of \\\"weaker\\\" regions was literally my first thought. The accompanying formulas appear to just make a trivial insight more sophisticated.\\n\\nThe authors should more critically reflect on their approach. For example,\\n(i) even if \\\"academic preparedness\\\" is caused by certain external factors, isn't academic preparedness still a key factor to a successful university curriculum?
If someone is not well prepared for university, they should not be admitted - that should be at the core of all admission procedures\\n(ii) the legal implications of adjusting for \\\"social determinants of opportunity\\\" should be considered, specifically if this correlates with sensitive attributes such as race. \\n(iii) trying to form groups again beyond sensitive attributes - again - introduces new sources of unfairness. For example, \\\"poor\\\" students growing up in \\\"rich\\\" regions. Also, if this kind of admission procedure were to gain traction, it would also be possible to trick procedures, e.g. for \\\"rich\\\" people renting temporarily to appear to be from a \\\"poor\\\" region.\\n\\nThe assumptions in the paper appear to be somewhat arbitrary. E.g., why assume the gamma parameterization in 4.3., and how is this justified?\\n\\nThe writing of the paper should also be improved. For example, what is the purpose of Section 2.1 for the contents of the paper?\", \"questions\": \"What are the authors' thoughts on the critical reflection of the approach (see weaknesses)?\\nWhy is this paper a good fit for specifically ICLR?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8F2j [1 out of 2]\", \"comment\": \"Thank you for the thoughtful questions and comments, and for the time devoted! We have provided a revised manuscript, where we use blue font to indicate added/revised material. Below please see our responses to specific points in the review comments:\\n\\n### **C1:** \\\"All three of these findings are well-known, and have been part of the debate around these policies for decades. The model doesn't provide extra insights into these policies.\\\"\\n\\n**A1:** Thanks for carefully considering our theoretical results.
Our response is threefold:\\n\\n- To begin with, to the best of our knowledge, our modeling that explicitly considers the contextual influence on individuals is novel, and there is no overlap with previous algorithmic fairness literature. Technically speaking, previous causal fairness approaches do not have the capacity to directly produce the same set of \\\"well-known\\\" arguments, because of the issues we discussed in Section 3.1, e.g., the region is typically not on the radar.\\n\\n- Furthermore, apart from the phenomena summarized by these findings, our causal modeling aims to characterize the reasons behind them. Social determinants of opportunity and their nontrivial role in the pursuit of procedural fairness, as you kindly commented, go beyond a reductive focus on ethnicity.\\n\\n- In addition, algorithmic fairness is naturally inter-disciplinary research. We do not view \\\"findings being well-known and part of debate over policies\\\" as a shortcoming. The quantitative analyses facilitated by our causal modeling very much align with domain knowledge in debates from related disciplines. This actually indicates that our model is able to present important findings in a precise and transparent way.\\n\\n---\\n\\n### **Q2:** \\\"Are the regions in the experiment latent? Region 1 and 3 are almost identical, calling into question the identifiability of the model.\\\"\\n\\n**A2:** Thanks for the insightful question. Yes, the regions are latent in our empirical results, and the optimization problem can indeed be under-constrained when the publicly available data contains only summary statistics (which is intentional for legal and ethical reasons).\\n\\nHowever, this is not a technical barrier to applying our framework in practice.
When the practitioner has access to the whole data set (e.g., the university's internal research or audit), the region is no longer a latent variable (e.g., directly obtained from the home address or the attended school).\\n\\nIn the revised manuscript, we have added extensive analyses on the US census data, where the region information is readily available. The related material can be found in Appendix C.2, along with a side note `Re: Q2 by Reviewer 8F2j` on page 26.\\n\\n---\\n\\n### **Q3:** \\\"If regions don\\u2019t correspond to geographies or social networks then how can they capture social determinants of opportunity?\\\"\\n\\n**A3:** By definition, geographical regions correspond to geographical locations, and the pattern of social networks (among people who live and/or operate in the location) differs across regions. Please kindly let us know if we accidentally misunderstood your question.\\n\\nMeanwhile, we would like to clarify that we are using region as a surrogate for social determinants of opportunity, and we demonstrate that our model facilitates more nuanced analyses than using the protected feature to enclose all these related correlations (which is more or less the default modeling choice in the algorithmic fairness literature). If there exist better measurements of social determinants of opportunity, our framework naturally incorporates them into the analyses.\\n\\n---\\n\\n(continuing)\"}
Instead, we are worried about the unintentional alignment in implications with racial essentialism, as an unintended outcome of the seemingly neutral technical choice.\\n\\nFurthermore, as you kindly pointed out, such an edge summarizes dozens of effects that previous frameworks are too coarse to model explicitly. If the modeling itself is too coarse, the precision and comprehensiveness needed for analyzing the legal/societal implications and policy interventions will face additional challenges. This is exactly part of the motivation behind our framework to address this issue and make the modeling more fine-grained.\\n\\n---\\n\\n### **C5:** \\\"[The authors' claim] is certainly true, but to [reviewer\\u2019s] knowledge not a single causal graph in history has ever actually described a real-world causal process where humans were involved. It\\u2019s extremely difficult to establish single treatment effects, let alone a network of them.\\\"\\n\\n**A5:** Thanks for sharing your insights.\\n\\nWe agree that the causal graph has its limitations. However, recent advances in causal discovery and causal representation learning suggest that under mild assumptions, we may be able to discover causal relations among both observed and latent causal variables with identifiability guarantees (see, e.g., Xie et al. 2020, Sch\\u00f6lkopf et al. 2021, Huang et al. 2022, Dong et al. 2024, Zhang et al. 2024). As the discovery methods identify more and more latent causal factors that are essential and necessary, we can get closer and closer to the underlying true causal process.\\n\\nAt the same time, we expect and sincerely hope that ideas and methods regarding how to achieve fairness can be developed in parallel with causal discovery and causal representation learning, such that these research efforts can inspire and benefit each other.\\n\\n---\\n\\n### **References**\\n\\nDong, X., Huang, B., Ng, I., Song, X., Zheng, Y., Jin, S., Legaspi, R., Spirtes, P. & Zhang, K. (2024).
A versatile causal discovery framework to allow causally-related hidden variables. International Conference on Learning Representations.\\n\\nHuang, B., Low, C. J. H., Xie, F., Glymour, C., & Zhang, K. (2022). Latent hierarchical causal structure discovery with rank constraints. Advances in Neural Information Processing Systems, 35, 5549-5561.\\n\\nSch\\u00f6lkopf, B., Locatello, F., Bauer, S., Ke, N. R., Kalchbrenner, N., Goyal, A., & Bengio, Y. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634.\\n\\nXie, F., Cai, R., Huang, B., Glymour, C., Hao, Z., & Zhang, K. (2020). Generalized independent noise condition for estimating latent variable causal graphs. Advances in Neural Information Processing Systems, 33, 14891-14902.\\n\\nZhang, K., Xie, S., Ng, I., & Zheng, Y. (2024). Causal representation learning from multiple distributions: A general setting. International Conference on Machine Learning.\"}", "{\"title\": \"Thank Reviewer h9Xa for the Feedback\", \"comment\": \"Thank you for the feedback, and for the constructive suggestions, many of which help us further improve our manuscript. We will follow your suggestion and move part of the material to the main text.\\n\\nAppendix C.2 is actually closely related to our proposed framework. We followed your suggestion (C2 of the original comment) and considered real-world data where the region information (e.g., the PUMA in census data) is not latent and readily available. The goal of these analyses is to demonstrate on real-world data that the region information, which is often dropped by previous approaches, actually contains rich implications of various contextual influences in different regions. Therefore, Appendix C.2 provides further practical evidence that supports our advocacy for explicit considerations of _social determinants of opportunity_.\\n\\nPlease let us know if there is any remaining question or concern.
Thanks again for the constructive suggestions.\"}", "{\"title\": \"Looking Forward to Feedback from Reviewer g5W4\", \"comment\": \"Thank `Reviewer g5W4` for the detailed and thoughtful comments. We have prepared **point-by-point responses**, and a **revised manuscript** together with **color-coded side notes** to help locate the related material.\\n\\nAs the reviewer-author discussion phase is quickly approaching an end, we are very eager to know if the clarifications and additional materials help address the questions and comments, and especially, potential misunderstandings.\\n\\nThanks again for the time and effort devoted. We are eagerly looking forward to your feedback.\\n\\nSincerely,\\n\\nAuthors of `Submission 4488`\"}", "{\"title\": \"Still Waiting for Feedback from Reviewer g5W4\", \"comment\": \"Dear `Reviewer g5W4`,\\n\\nThanks again for your thoughtful and detailed comments.\\n\\nAs we replied in our **point-by-point responses**, our work is very relevant to ICLR as per the call for papers. We have also provided a **revised manuscript with further discussions and additional experiments**, together with **color-coded sidenotes** to help locate relevant materials. As the discussion phase is quickly coming to an end, we will be very grateful for an opportunity to engage in a conversation. We look forward to your feedback, and we are eager to understand if the original questions and concerns are resolved.\\n\\nYours sincerely,\\n\\nAuthors of Submission 4488\"}", "{\"title\": \"Response to Reviewer g5W4 [1 out of 2]\", \"comment\": \"Thanks for the thoughtful and detailed comments, as well as the time and effort devoted! We have provided a revised manuscript, where we use blue font to indicate added/revised material. Below please also see our responses to specific comments and questions:\\n\\n### **Q1:** \\\"Why is this paper a good fit for specifically ICLR?\\\"\\n\\n**A1:** Thanks for the question.
According to [ICLR 2025 Call for Papers](https://iclr.cc/Conferences/2025/CallForPapers), the non-exhaustive list of relevant topics includes \\\"societal considerations including fairness, safety, privacy.\\\" Our paper is about procedural fairness (\\\"a societal consideration about how fairness could be achieved\\\" as you kindly summarized), which is very relevant to ICLR.\\n\\n---\\n\\n### **C2:** \\\"While the claim of the paper is to discuss the 'social determinants of opportunity' in general, the discussion focuses very much on a single use case, i.e., university admissions.\\\"\\n\\n**A2:** Thanks for asking about the scope of the discussion. We strive to balance between a broad discussion and a case study. We believe a concrete empirical setting would be helpful to demonstrate the nuanced analyses our framework facilitates, which can be applied to more general practical scenarios other than university admission.\\n\\nIn light of your comment, in Appendix A.4, we have added discussions on the role played by _social determinants of opportunity_ in various scenarios, including health, education, and employment, along with a side note `Re: C2 by Reviewer g5W4` on page 20 to help locate the related material.\\n\\n---\\n\\n### **C3:** \\\"The paper is written in a very US-centric way, specifically considering the legal situation.\\\"\\n\\n**A3:** Thanks for the comment. We would like to respond in two parts: (1) why we pay special attention to US legal cases, and (2) why the implications of our framework are not limited to the US.\\n\\n- We pay special attention to US legal cases in part because of the clear trajectory of jurisprudence, many of which reached the US Supreme Court. Since algorithmic fairness is naturally a cross-disciplinary topic, we aim to provide a new perspective through the explicit causal modeling of _social determinants of opportunity_ for procedural fairness.\\n\\n- The implications of our framework are not limited to the US. 
As pointed out in previous literature (see, e.g., Sowell, 2004), quotas and group preferences (although under a variety of names) have existed in various countries with different histories and traditions. Incorporating social determinants of opportunity in procedural fairness analysis can potentially help address different scenarios that share very similar characteristics.\\n\\nIn light of your comment, we have included the above discussion in Appendix A.4, along with side node `Re: C3 by Reviewer g5W4` on page 20.\\n\\n---\\n\\n### **C4:** \\\"The case considerations in Section 4 often (e.g., Section 4.3.) come to conclusions that are quite trivial ... The accompanying formulas appear to just make a trivial insight more sophisticated.\\\"\\n\\n**A4:** Thanks for sharing your thoughts that our findings are not that surprising. We do not view not-being-surprising as a shortcoming, especially when previous algorithmic fairness approaches (e.g., causal fairness) do not produce such conclusions, because of the issues discussed in Section 3.1.\\n\\nFurthermore, in addition to the unintended consequences of certain policies, our framework also facilitates quantitative and causal analyses that aim to uncover the reason behind these phenomena. Instead of \\\"what might or might not happen\\\", our framework precisely quantifies what will definitely happen under the scenario. The fact that our findings align with domain knowledge actually demonstrates the value of our approach towards procedural fairness.\\n\\n---\\n\\n(continuing)\"}", "{\"title\": \"Response to Reviewer g5W4 [2 out of 2]\", \"comment\": \"(continued)\\n\\n---\\n\\n### **C5.1:** Critical Reflection - \\\"Even if 'academic preparedness' is caused by certain external factors, isn't academic preparedness still a key factor to a successful university curriculum?\\\"\\n\\n**A5.1:** Yes, you are totally right. 
This is exactly why our framework presents **no** objection to the importance of academic preparedness itself. Instead, we aim to address the issue of attributing discrimination only through the simplified relationship between race and the academic preparedness (as in previous causal fairness approaches). We argue that social determinants of opportunity, while not being individual-level attributes, correlate with race and should be considered explicitly to achieve procedural fairness.\\n\\n---\\n\\n### **C5.2:** Critical Reflection - \\\"The legal implications of adjusting for 'social determinants of opportunity' should be considered, specifically if this correlates with sensitive attributes such as race.\\\"\\n\\n**A5.2:** Thanks for the constructive comment. We totally agree, and this is actually part of the reason behind our special attention to the clear trajectory of legal cases.\\n\\nIn terms of legal implication of \\\"adjusting for social determinants of opportunity\\\", we wholeheartedly agree that this question should be addressed, and our framework of precisely and explicitly modeling the relationship (which is often not considered enough in our field, as you kindly pointed out) would serve as an important and necessary first step.\\n\\n---\\n\\n### **C5.3:** Critical Reflection - \\\"Trying to form groups again beyond sensitive attributes - again - introduces new sources of unfairness.\\\"\\n\\n**A5.3:** Thanks for the thoughtful question. There might be some misunderstandings and please allow us to clarify.\\n\\nWe are not trying to form new groups beyond sensitive attributes and treat them as if they were from a different group. Instead, we aim to explicitly model and address the different boosts/impediments to opportunities faced by different demographic groups, __when they are part of various contextual environments__. 
In other words, the influences from contextual environments are not individual-level attributes attached to the person, and they will change accordingly if an individual is subject to a different context.\\n\\n---\\n\\n### **C5.4:** Critical Reflection - \\\"[what if] 'poor' students growing up in 'rich' regions ... [or] 'rich' people renting temporarily to appear to be from a 'poor' region.\\\"\\n\\n**A5.4:** Thanks for trying to go further and consider potential adversarial behaviors. While the profile can appear to be different from the truth (e.g., the temporary rental at a different region), the underlying mechanism cannot be easily faked. If the student is attending a specific school, the school's influence on the student is not altered by where the student (temporarily) lives. Furthermore, the potential adversarial behavior can be modeled in dynamic settings as an extension of our framework, which is a natural direction for further research.\\n\\n---\\n\\n### **Q6:** \\\"The assumptions in the paper appear to be somewhat arbitrary. E.g., why assume the gamma parameterization in 4.3., and how is this justified?\\\"\\n\\n**A6:** Thanks for carefully considering the assumptions in theoretical analyses. The assumptions are not arbitrary. \\n\\nAccording to educational research (see, e.g., Arthur et al., 2019), the distribution of student scores is roughly bell-shaped but is often not perfectly Gaussian. The distribution tends to skew towards the low-score end, and the support is often bounded (e.g., falls in [Min, Max]). Therefore in Assumption 4.3, we use Gamma distributions to parameterize the score distribution. They are versatile in modeling the skewness and long-tail behaviors, while at the same time facilitating closed-form theoretical analyses. 
We have included this discussion in Appendix C.1.2, along with a side note `Re: Q6 by Reviewer g5W4` on page 25.\\n\\n---\\n\\n### **Q7:** \\\"What is the purpose of Section 2.1 for the contents of the paper?\\\"\\n\\n**A7:** Thanks for the careful reading and thoughtful question. Section 2.1 provides a brief introduction to causal modeling with a directed acyclic graph (DAG). In our framework, we use a DAG to represent the causal process, as is also done in previous causal fairness literature. For completeness, we introduce the DAG representation of causality together with our notation conventions in Section 2.1.\\n\\n---\\n\\n### **References**\\n\\nArthurs, N., Stenhaug, B., Karayev, S., & Piech, C. _Grades Are Not Normal: Improving Exam Score Models Using the Logit-Normal Distribution._ Proceedings of the 12th International Conference on Educational Data Mining, 2019.\\n\\nSowell, Thomas. _Affirmative Action Around the World: An Empirical Study._ Vol. 67. Yale University Press, 2004.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work explores incorporating the concept of social determinants of opportunity, variables that relate to an individual's academic success causally and potentially implicitly. The authors deviate from past work by modeling implicit relationships between these variables rather than simplified relationships and further explore adding previously omitted variables. Then, framing academic preparedness as an optimization problem, the authors find a correlation between race and social determinants of opportunity, using GPA as an estimation of academic preparedness and analyzing the University of California's admissions data. 
As an analysis of existing data, this work proposes modeling protected characteristics and studying the influence of contexts and environments on the individual for fairness analysis.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors seek to model fairness in college admissions by disentangling variables that implicitly model each other, which provides an interesting framework for considering the intersectionality of factors that influence an individual. When applied to college admissions and academic preparedness, the authors provide a convincing argument for abstracting out social determinants of opportunity and studying the underlying framework and its impact on the individual. Further, the authors demonstrate various applications of their framework in Section 4 in studying historical admissions systems.\", \"weaknesses\": \"While there are limitations in studying observational data, this paper could have benefitted from a further analysis of the University of California dataset; the authors do acknowledge the limitations of summary statistics but further discussion of the dataset and analysis on other datasets could have provided more support to the experimental section of this paper. The authors briefly discuss their experimental findings but the three separate graphs in Figure 3 could have benefitted from further discussion, particularly in relation to each other and how the correlation between race and social determinants of opportunity relates to understanding academic preparedness in a region. Potentially interleaving the methods and providing experimental results for the modeling of past admissions systems could have provided more tractable examples of how this framework compares to prior work.\", \"questions\": \"How does modeling University of California admissions data via the presented framework differ experimentally from past methods? 
What are possible extensions of this framework in developing more holistic admissions systems? A theoretical analysis of what this kind of admissions system could look like could provide further insight into the extensions of this model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Conversation with Reviewer 8F2j\", \"comment\": \"Thank Reviewer 8F2j for engaging in further conversation.\\n\\nThe additional analyses on US Census data are for the purpose of demonstrating how different regions instantiate very different contextual environments, and calling for attention to the _social determinants of opportunity_. We view our approach as a necessary and important first step to _raise the salience of geographic and community influences on opportunity over a reductive focus on ethnicity_, as you kindly pointed out.\\n\\nWe respectfully disagree with the comment that our approach is \\\"an additional binary variable and an conditional independence statement [..., and] in practice is a simple extension of previous work\\\":\\n\\n1. As demonstrated in the census data analysis, the region is **not just a binary variable**, but a surrogate to _social determinants of opportunity_ of various contextual environments (we explicitly mentioned that it can go beyond binary cases, e.g., at line 457). Our framework naturally incorporate better measurements of _social determinants of opportunity_, if they are available.\\n\\n2. We identify and aim to address the potential issues of previous causal fairness approaches, namely, the recapitulation of stereotypes, the limited scope of individual-level variables, and omitting relevant variables. 
In practice, addressing these issues involves intentionally looking for or developing better measurements of _social determinants of opportunity_, instead of causal effects originating from the protected feature.\\n\\nThanks again for sharing your thoughts. Please let us know if there is any remaining question or concern on any specific point.\"}"
] }
4j9plQoOH1
LongViTU: Instruction Tuning for Long-Form Video Understanding
[ "Rujie Wu", "Xiaojian Ma", "Hai Ci", "Yue Fan", "Yuxuan Wang", "Haozhe Zhao", "Qing Li", "Yizhou Wang" ]
This paper presents LongViTU, a large-scale (~121k QA pairs, ~900h videos), automatically generated dataset for long-form video understanding. Our key idea is inspired by the success of Large Language Models (LLMs) and Multimodal Language Models (MLMs) that are fueled by machine-generated instruction-following data (*e.g.*, InstructGPT, LLaVA). We developed a *systematic* approach to produce massive question-answering pairs tailored to virtually unbounded long videos by organizing them into a ***hierarchical tree***, incorporating ***self-revision*** mechanisms to guarantee high quality. We curate LongViTU so that each QA pair: 1) involves a long context (average *certificate length* of 4.6 minutes); 2) requires rich knowledge and condensed reasoning (commonsense, causality, planning, *etc.*); 3) explicitly labels the timestamps of relevant events throughout the entire video. Furthermore, LongViTU provides a benchmark to facilitate future research in instruction-following for long-form videos. Our experiments first reveal the performance gap between open-source video MLMs and their commercial counterparts (*e.g.*, Gemini-1.5-Pro) on this benchmark. Supervised Fine-Tuning (SFT) on open-source models led to Video-LLaVA achieving the best performance, with a GPT-4 score of $50.7$, closely following $52.3$ by the leading closed-source model Gemini-1.5-Pro, underscoring the substantial challenge posed by our benchmark. Further SFT on LongViTU with Video-LLaVA resulted in improvements of $30.7$% on the In-Distribution (ID) benchmark EgoSchema; $12.9$% and $0.6$% on the Out-of-Distribution (OOD) benchmarks WorldQA and VideoMME, respectively. These outcomes demonstrate the effectiveness and robust OOD generalizability of our proposed instruction-tuning scheme for long-form video understanding. The dataset, SFT models, and code are publicly available on the anonymous page [LongViTU](https://longvitu.github.io).
[ "vision language models", "instruction-tuning", "long-form video understanding" ]
https://openreview.net/pdf?id=4j9plQoOH1
https://openreview.net/forum?id=4j9plQoOH1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkCyubB3OY", "xEELPzBxgs", "unj6rj7Dif", "jtgauTat7F", "D4x0ql92bi" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730730837734, 1731485131793, 1730603941940, 1730625058143, 1730451400373 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7604/Reviewer_KRZ6" ], [ "ICLR.cc/2025/Conference/Submission7604/Authors" ], [ "ICLR.cc/2025/Conference/Submission7604/Reviewer_fVx1" ], [ "ICLR.cc/2025/Conference/Submission7604/Reviewer_rfFt" ], [ "ICLR.cc/2025/Conference/Submission7604/Reviewer_sDiL" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces LongViTU for video understanding, which comprises approximately 121k question-answer pairs across 900 hours of video content, focusing on long-context videos that require rich knowledge and reasoning. The authors propose a hierarchical pipeline for generating high-quality QA pairs with explicit timestamp labels, catering to diverse real-world scenarios. LongViTU is curated to support fine-grained and open-ended QA. The paper also presents experiments demonstrating the performance gap between open-source and commercial models on this benchmark and the effectiveness of SFT on LongViTU.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to follow, and the experiments are clearly described.\", \"The dataset is of high quality, featuring a large number of QA pairs and encompassing a variety of diverse scenarios.\"], \"weaknesses\": [\"Figure 1: The icons, while visually appealing, come across as unprofessional and occupy space that could be better utilized to present more information.\", \"Ablation Studies: The paper lacks ablation studies for different-level captions. 
For instance, it would be beneficial to know if event-level captions can be skipped without significant detriment.\", \"Results: Additional results are necessary to clarify the performance of different Multi-modal Large Language Models (MLLMs) on LongViTU videos with varying durations.\", \"Comparison with ShareGPT4Video[1]: The authors of ShareGPT4Video present a progressive framework that generates detailed captions for diverse videos. In contrast, LongViTU focuses solely on ego-centric videos due to its dependence on human annotation, which potentially limits its application and robustness for general QA, as evidenced in Table 3.\", \"---\"], \"reference\": \"[1] Chen, Lin et al. \\u201cShareGPT4Video: Improving Video Understanding and Generation with Better Captions.\\u201d ArXiv abs/2406.04325 (2024): n. pag.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces LongViTU, a large-scale dataset designed for long-form video understanding, featuring approximately 121k question-answer pairs across 900 hours of video content. It addresses challenges in long-form video understanding by offering a dataset with diverse real-world scenarios, explicit timestamp labels, long certificate lengths, fine-grained categorization, and open-ended precise QA pairs. LongViTU is curated to facilitate instruction tuning for long-form videos, involving the organization of video content into a hierarchical tree and incorporating self-revision mechanisms to ensure high-quality QA pairs. 
The authors primarily validate the effectiveness of LongViTU through experiments conducted on two different models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe approach of organizing video content into a hierarchical tree structure is innovative. This method allows for the generation of question-answer pairs that capture both spatial and temporal details, which is a creative extension of existing video understanding frameworks.\\n2.\\tThe dataset provides fine-grained categorization of questions, which is crucial for advancing the understanding of complex video content and adds depth to the quality of the dataset.\", \"weaknesses\": \"1.\\tIn Table 2, it can be observed that there is a lack of differentiation in the benchmark. The performance gap between the best-performing Gemini-1.5-Pro and the other models is not evident. According to the reviewer, in most existing benchmarks, Gemini-1.5-Pro demonstrates a significant performance advantage over Video-LLaVA.\\n\\n2.\\tThe proposed benchmark employs GPT-4 for assessment, which may introduce additional bias. \\n\\n3.\\tThe validation method employed was released some time ago, and its baseline performance is no longer highly competitive compared to more recent models. It remains unclear whether it can still deliver significant performance improvements on more recently proposed models.\", \"questions\": \"1.\\tIs it possible that using GPT-4 for evaluation may struggle to distinguish fine-grained semantics? 
For instance, if sentences differ by only one or two keywords but convey significantly different meanings, how would GPT-4 rate them in such cases?\\n\\n2.\\tCan LongViTU still deliver substantial performance improvements on models that perform better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces LongViTU, a novel large-scale dataset (~121k QA pairs, ~900h videos) for long-form video understanding. The authors address the limitations of existing video question-answering (VQA) datasets by focusing on several key aspects: diverse real-world scenarios (leveraging Ego4D), explicit timestamp labels for QA-related events, long average certificate length (4.6 minutes), fine-grained categorization of QA pairs (spatiotemporal understanding, episodic reasoning, commonsense inference), and open-ended, precise QA generation. A hierarchical pipeline, employing LLMs (primarily GPT-4) at multiple stages (hierarchical video tree construction, long-form QA generation, self-revision), is used for automatic dataset creation. Experiments demonstrate the challenges posed by LongViTU to existing video language models (VLMs), showing a performance gap even between open-source and commercial models. Fine-tuning on LongViTU improves performance on both in-distribution and out-of-distribution benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. LongViTU explicitly addresses the limitations of temporal context, length, and fine-grained question types from the perspective of SFT. The hierarchical pipeline for automatic dataset generation is a sound procedure to create long-form annotations from bottom to top. The sheer scale of the dataset (~900 hours of video) and its diversity in terms of scenarios and question types are decent. The use of Ego4D ensures real-world relevance.\\n2. 
The paper includes a thorough quantitative evaluation on LongViTU and several benchmark datasets, demonstrating the effectiveness of the dataset and highlighting the challenges it presents. The use of GPT-4 for scoring is a reasonable approach given the open-ended nature of the QA pairs. Qualitative examples further illustrate the dataset's capabilities. The availability of the dataset, fine-tuned models, and code is a valuable contribution to the community.\", \"weaknesses\": \"1. The reliance on LLMs (GPT-4) throughout the pipeline raises concerns about potential biases inherited from the pre-training data of these models. Moreover, a hierarchical pipeline may cause error accumulation, making the bias even worse. A thorough analysis of potential biases in the generated QA pairs is missing.\\n2. While self-revision is employed, a more robust human evaluation of the dataset quality would strengthen the paper's claims. The current human evaluation seems limited to Appendix B.\\n3. Experiments need improvements. The number of models evaluated in the benchmark is too limited, and some of the current long video large language models, such as LongVA, LongVILA, have not been included in the evaluation. The model performance used to validate the training dataset's effectiveness is too weak (for instance, LLama-VID performs below random chance on VideoMME), and the improvements achieved after fine-tuning are relatively minor.\", \"questions\": \"1. How were the specific parameters for the sliding window (five segments) determined? What is the sensitivity of the results to changes in this parameter?\\n2. What is the inter-annotator agreement (IAA) for the human annotations used in the Ego4D dataset, and how does this affect the quality of LongViTU?\\n3. What are the computational costs associated with generating and processing LongViTU?\\n4. Can you provide a more detailed analysis of the biases present in the generated QA pairs?\\n5. 
How does the performance of the fine-tuned models change with different sizes of the LongViTU training set?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors propose a LongViTU benchmark for Long-Form Video Understanding. Basically, they leverage Ego4D as the data source, and develop a three-stage pipeline for QA annotation and revision. First, they build up a hierarchical video tree to describe videos at different temporal scales. Second, they apply a sliding window approach to each subtree, and generate QA pairs for the subtree using GPT-4. Third, they use GPT-4 to make a thorough revision of the generated question-answering pairs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1 Topic is good. Long-form video understanding is a challenging but important problem. Developing a benchmark for instruction tuning and evaluation is critical in this problem.\\n\\n2 Experiments are sufficient. The experimental studies are interesting to show the challenges and potential of this benchmark.\", \"weaknesses\": \"1 This benchmark is based on EGO4D. Hence, the annotation would be similar to EgoTaskQA. As shown in Table 1, the difference is the increased scale of the dataset and the newly-added timestamp annotations. Is such timestamp annotation important or not? Are there any experimental results to show its impact on your benchmark?\\n\\n2 The hierarchical video tree style design is similar to [MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding, arXiv:2312.04817]. \\n\\n3 The paper writing should be refined. The structure is OK, while the content is not quite easy to read.\", \"questions\": \"Please see weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4ikjWBs3tE
Transformers Learn Low Sensitivity Functions: Investigations and Implications
[ "Bhavya Vasudeva", "Deqing Fu", "Tianyi Zhou", "Elliott Kau", "Youqi Huang", "Vatsal Sharan" ]
Transformers achieve state-of-the-art accuracy and robustness across many tasks, but an understanding of their inductive biases and how those biases differ from other neural network architectures remains elusive. In this work, we identify the sensitivity of the model to token-wise random perturbations in the input as a unified metric which explains the inductive bias of transformers across different data modalities and distinguishes them from other architectures. We show that transformers have lower sensitivity than MLPs, CNNs, ConvMixers and LSTMs, across both vision and language tasks. We also show that this low-sensitivity bias has important implications: i) lower sensitivity correlates with improved robustness; it can also be used as an efficient intervention to further improve the robustness of transformers; ii) it corresponds to flatter minima in the loss landscape; and iii) it can serve as a progress measure for grokking. We support these findings with theoretical results showing (weak) spectral bias of transformers in the NTK regime, and improved robustness due to the lower sensitivity.
[ "transformers", "sensitivity", "grokking" ]
Accept (Poster)
https://openreview.net/pdf?id=4ikjWBs3tE
https://openreview.net/forum?id=4ikjWBs3tE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yD6mYfcsgv", "xpONKB5HeT", "xMJ4n8kJIU", "kW9UDjjSCb", "iHJUm4e5wy", "cg6VPCJsUw", "a2Sisjmbpy", "UC6UHSM6RI", "QpthyEIZVp", "POxRwjDTKI", "Mr7ecOWxUT", "JYulT3zwV4", "IdEBIo8bYq", "FxmCPzeOGQ", "F9FGjDPl56", "ChL5DkE378", "BaYbBKcyjl", "971SiwmUYC", "4gkMzje6Jw", "4RET9VLNgp", "2XhL16odDK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732490268947, 1732827493506, 1732258748487, 1732258962260, 1732828149854, 1737523432225, 1732723294224, 1732490599951, 1732823170515, 1734674785849, 1732259010646, 1730867181910, 1732833501989, 1732258987669, 1732822919392, 1730303882998, 1732258825636, 1730501668942, 1732258792919, 1731044641800, 1732489148830 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_kdeM" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_eYcM" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_eYcM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_eYcM" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_A7Md" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Area_Chair_3znp" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_A7Md" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_eYcM" ], [ 
"ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_kdeM" ], [ "ICLR.cc/2025/Conference/Submission1024/Authors" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_pCqz" ], [ "ICLR.cc/2025/Conference/Submission1024/Reviewer_kdeM" ] ], "structured_content_str": [ "{\"comment\": \"I have read the reviewer's concerns and the authors' response. Since this review has given a score that I found surprisingly low for this paper, I take the liberty to bring to the reviewer's attention that the authors seem to have addressed the reviewer's concerns. In particular, I agree with the authors that the noisy strategy is described and also that lines 154-157 clarify the implications of Proposition 2.1.\\n\\nRegarding the limitations of the experiments and the datasets, it is my understanding that this paper explores the sensitivity and inductive bias of transformers, and present rigorous theoretical arguments. The experiments seem tailored to the main aims of the paper. Respectfully, why does the reviewer find it necessary to provide additional experiments on more datasets and models, considering the main goal of the paper? Of course, more experiments on models and datasets with more practical relevance are helpful, but is lack of such experiments enough grounds for rejection? On the same note, doesn't the reviewer think this very same objection could hold for a large set of accepted papers at ICLR and other venues, many of which make valuable contributions?\"}", "{\"comment\": \"Thanks for your last message with an interesting discussion about the topics raised. 
I find the answers sensible, and I encourage the authors to include such comments in the final paper, in the form the authors find most reasonable.\\n\\nGiven the authors' engagement, grounded answers, and the much clearer focus of the paper and setup, I will update my score to above acceptance.\"}", "{\"comment\": \"Thank you for the positive feedback and the helpful comments.\\n\\n\\n**W1**: Thank you for the suggestion, we have revised the sentence.\\n\\n\\n**Q1: How does sensitivity relate to robustness to adversarial/random perturbations?**\\n\\nThank you for the question. While low sensitivity (to token-wise perturbations) correlates with better robustness, they are not equivalent. Specifically, in App. A.2, we evaluate sensitivity to Gaussian noise added across the input and observe that this metric does not distinguish transformers from other architectures as clearly as the sensitivity to token-wise Gaussian perturbations.\"}", "{\"comment\": \"Thank you for the detailed comments and feedback to help improve our work. We hope that the following responses will address the reviewer\\u2019s concerns and would be happy to address further questions the reviewer may have.\\n\\n**Weaknesses:**\\n\\n**1: Regarding the scale of experiments**\\n\\n*a. \\u201cResults in Section 3 use a single attention layer model\\u201d*\\n\\nWe emphasize that the goal of Section 3 is to show that even in a very simple setting with a single-layer self-attention model, we can see the low sensitivity simplicity bias. While the dataset **can** be considered for analyzing more complex model architectures as well, we choose not to do so because a) it is not necessary to use a more complex model for this data and b) it is more interesting to analyze larger models in more realistic settings. Hence, we compare larger models on real-world vision and language datasets in Sections 4 and 5. 
That being said, the experiments in Section 3 provide some insight into what part of the model gives rise to the low sensitivity bias.\n\n*b. \u201cExperiments in Sections 4 and 5 use small-scale datasets and models\u201d*\n\nWe emphasize three points to address this concern.\n\nFirst, we respectfully disagree with the statement that \u201cbold conclusions are extracted from a single model/dataset\u201d. We compare several datasets and models to validate our claims. Specifically, to show that transformers learn lower sensitivity functions than CNNs, we consider two datasets as follows. \n- We consider the CIFAR10 dataset and compare two CNN-based models, namely ResNet18 and DenseNet12, with a ConvMixer model and two ViT models, and\n- We compare a CNN (ResNet18) and a ViT on the SVHN dataset.\n\nNext, to show that transformers learn lower sensitivity functions than MLPs,\n- we consider the FashionMNIST dataset and compare a ViT, a simpler 3-layer CNN and MLP-based models with two activation functions. \n- we also consider a binary classification task with the MNIST dataset and compare a ViT and an MLP. \n\nWe consider simpler models and datasets for comparison with MLPs because we expect these models to perform reasonably well on these datasets.\n\nSimilarly, for comparisons with LSTMs, we consider two datasets, MRPC and QQP, and compare with two language models, namely RoBERTa (in the main body) and GPT-2 (in Fig. 20 in the Appendix, as mentioned in line 401 in the paper). \n\nSecond, we note that we focus on relatively simpler datasets and models because we can train the models from scratch in these settings to get reasonably good performance. This is important because we compare the sensitivity of models that have comparable and reasonably good train accuracy. 
We want to single out the effect of the architecture, without any confounders such as the choice of the optimization algorithm or data augmentation or use of any pretraining strategies, which might vary for different architectures to get good accuracy. \\n\\nThat being said, we compare (pre-trained) ConvNeXT and VIT-B/16 models on ImageNet-1K dataset to show that the conclusions are relatively robust to pretraining and increasing scale. However, a systematic and fairer comparison for large-scale models would warrant better control over the pretraining strategies, which is beyond the scope of this work. \\n\\nThird, we acknowledge the reviewer\\u2019s concern about the claims not directly transferring to LLMs. However, we emphasize that our work aims to give insights about the inductive biases and other properties of the transformer architecture in general, and we don\\u2019t claim that these insights transfer directly to LLMs or about the interactions with pretraining and finetuning. That being said, comparisons with GPT-2 model, which is a causal model and more similar to recently used language models compared to RoBERTa, lead to the same conclusions and indicate that we can expect them to also hold for other language models. \\n\\n\\n**2: Clarity**\\n\\n*a. \\u201cIt is not clear how noising is applied to image data\\u201d*\\n\\nFig.1 is exactly what is done for images \\u2013 adding noise to one patch at a time. Since noise is at the patch level, the process is the same for CNNs and ViTs.\\n\\n*b. \\u201cIt is hard to understand the importance of Proposition 2.1\\u201d*\\n\\nAs stated in lines 154-157 in the paper, \\u201cLarger eigenvalues for lower-order monomials indicate that simpler features are learned faster. 
Since low sensitivity implies learning low-degree polynomials, Proposition 2.1 also implies a weak form of low sensitivity bias.\u201d\"}", "{\"title\": \"Answer to reviewer kdeM in the review thread\", \"comment\": \"I appreciate your enthusiasm about this work being accepted. However, I had several concerns that required clarification, which is common during a review process, since we all have different opinions and reviewing criteria.\n\nI would be happy to discuss this further during the reviewer-reviewer discussion period if needed.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Answer to rebuttal\", \"comment\": \"I would like to thank the authors for the details provided in their rebuttal. Most of my questions have been resolved after reading the rebuttal:\n\n* The experiment and discussion about $\\sigma$ showing that _\\\"even though the sensitivity values are different for different $\\sigma$ (as expected), the conclusion that transformers learn lower sensitivity functions than CNNs is robust to the value of $\\sigma$\\\"_ is relevant.\n\n* The discussion and clarification about noise being added at pixel level for images, and after LN for text. This detail is important and I might have missed it during my initial review, assuming that _token_ referred to some representation of a piece of data (either image or text). In such case, I agree that ranges are consistent across images (or text if LN is used) and that a fixed noise level is a reasonable choice.\n * I wonder how your approach works if applied on LN of image models? I don't see why it would not work, but it might be interesting to show for consistency with the text modality.\n\n* The overall response to the _simplicity_ of the model in Section 3, summarized as _\\\"the goal of Section 3 is to show that even in a very simple setting with a single-layer self-attention model, we can see the low sensitivity simplicity bias\\\"_. 
I believe this could be stated upfront in that section, so the reader understands the focus. As it is now, in L172 it is stated _\\\"In order to investigate the inductive biases in real-world image and language tasks, we need an equivalent metric for high-dimensional, real-valued data\\\"_. I suggest explaining that a simple setup as proof-of-concept will be provided in Sec 3.1, and that real-world scenarios will be provided in Sec 4, 5.\\n\\n* I believe the overall focus of the paper could be emphasized, as the authors have done in their answer. For example, the comment: _\\\"we emphasize that our work aims to give insights about the inductive biases and other properties of the transformer architecture in general, and we don\\u2019t claim that these insights transfer directly to LLMs or about the interactions with pretraining and finetuning.\\\"_ is a good example. Some direct statement like this could also help the reader to understand the contributions and limitations of this work. \\n\\n**Still some questions:**\\n\\n* Noising strategy:\\n\\n> _Since noise is at the patch level, the process is the same for CNNs and ViTs._\\n\\nIf I understand correctly, for CNNs, noise is added on an image patch (a bounding box within the full image). However, CNNs do not consume patches, but rather _scan_ the whole image with convolutional kernels. In my opinion, adding Gaussian noise to a specific part of the image has not the same effect as adding noise to a ViT token. Could the authors comment on that aspect?\\nWhy was pixel noising preferred over noising at LN level as done in the text case?\\n\\n* Connection with [1, 2]. The comment provided in my review:\\n\\n> _CNNs tend to learn from the easiest cues available. In the experiments in Section 3.1, these \\\"easy\\\" cues would be the sparse tokens. 
Once they become uninformative, the next available (but harder) cue is the frequent tokens. \n\nis something that I think should be addressed, since the connection with [1, 2] might reduce novelty. This is somehow related to the concern raised by Reviewer A7Md about the connection with works relating robustness and augmentations.\n\n> Geirhos, Robert, et al. \\\"Shortcut learning in deep neural networks.\\\" Nature Machine Intelligence 2.11 (2020): 665-673.\n\n> Geirhos, Robert, et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" arXiv preprint arXiv:1811.12231 (2018).\n\n**Overall comment:**\n\nThe authors have provided thorough responses and clarification that have improved my confidence in this work, as well as in the experimental setup. However, there are some aspects I consider worth clarifying before making a decision about acceptance.\"}", "{\"comment\": \"Thank you for your response. I appreciate the authors revising the claim about explanation. My other concerns remain, so I will keep my score.\"}", "{\"comment\": \"We thank the reviewer for discussing with us and we are glad that most of the reviewer\u2019s concerns were resolved. We would like to further address the reviewer's questions and comments as follows.\n\n**Why add noise to image patches?**\n\nTo ensure fair comparisons of sensitivity between CNNs and ViTs, we want to ensure that the input images are corrupted in the same way and that, to adhere to the definition of sensitivity, only some local tokens are corrupted. This is why we chose to corrupt image patches with Gaussian noise. Although CNNs are designed to convolve the images, their convolution kernels are usually small and local, for example, the kernel size of ResNet is only 3x3. 
In this sense, at early layers, CNNs still process local pixels and this will further contribute to their overall sensitivity.\n\n\n**Why not add noise after LayerNorm?** \n\nGood question! The goal of our experiments is to compare the sensitivity of ViT with that of many architectures such as ResNet, DenseNet, MLP, and ConvMixers. Unfortunately, most of these architectures use BatchNorm instead of LayerNorm. It would be nice to modify their architectures to use LayerNorm, but that seems beyond the scope of the main message of this paper.\n\n\n**Connection to prior work**\n\nWe agree with the reviewer that there are connections with [1,2]. We discuss related work on simplicity bias in deep learning in lines 1449-1460 in the Appendix (due to space constraints). It is evident from prior work that the notion of simplicity can vary from one architecture to another while characterizing the simplicity bias. For instance, CNNs trained for object recognition tasks have been found to rely on texture rather than shape to make predictions, while MLPs (specifically, 1 hidden-layer NNs) have been found to rely on a lower-dimensional projection of the input data to make predictions. \n\nOne of the contributions of our work is to identify the metric that can distinguish between transformers and other architectures. Our work identifies low sensitivity as a notion of simplicity bias for transformers that is observed systematically across various settings. We also note that it has several useful properties, as discussed in the paper, like being a natural analog of the notion of sensitivity used in Boolean function analysis, being predictive of properties like robustness and generalization, and serving as a progress measure for grokking. 
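To make the token-wise Gaussian sensitivity measure discussed in this thread concrete, here is a minimal sketch. The toy model, the noise scale `sigma`, the sample count, and the use of an L2 change in the output are illustrative assumptions on our part, not the paper's exact implementation:

```python
import numpy as np

def token_sensitivity(model, x, sigma=0.1, n_samples=16, seed=0):
    """Estimate token-wise sensitivity of `model` at input `x`.

    `x` has shape (num_tokens, dim). For each token, Gaussian noise
    N(0, sigma^2 I) is added to that token alone; the mean L2 change in
    the model output is recorded, then averaged over tokens.
    """
    rng = np.random.default_rng(seed)
    num_tokens, dim = x.shape
    base = model(x)
    per_token = []
    for t in range(num_tokens):
        diffs = []
        for _ in range(n_samples):
            x_noisy = x.copy()
            x_noisy[t] = x_noisy[t] + rng.normal(0.0, sigma, size=dim)
            diffs.append(np.linalg.norm(model(x_noisy) - base))
        per_token.append(np.mean(diffs))
    return float(np.mean(per_token))

# Toy stand-in for a network: mean-pool tokens, then a fixed linear map.
W = np.array([[1.0, -0.5], [0.3, 2.0]])
toy_model = lambda x: x.mean(axis=0) @ W

x = np.zeros((4, 2))  # 4 "tokens" of dimension 2
s_small = token_sensitivity(toy_model, x, sigma=0.01)
s_large = token_sensitivity(toy_model, x, sigma=0.1)
# Absolute values scale with sigma; as the rebuttal notes, it is the
# relative comparison between architectures that carries the conclusion.
```

This also illustrates why the sigma-robustness check in the rebuttal matters: the raw score grows with `sigma`, so only comparisons at a matched noise level are meaningful.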
\\n\\n\\nAgain, we appreciate the reviewer\\u2019s time and engagement in the discussion and we would be happy to answer any further questions or concerns.\"}", "{\"metareview\": \"This paper extends the original notion of sensitivity in Boolean function analysis to an empirically measurable quantity and study it on transformers. The main conclusions contain three points, including lower sensitivity correlates well with robustness (this is almost by definition), flat minima, and grokking. The use of sensitivity to understand transformers is not new, e.g., Bhattamishra et al., and the main novel points are its correlations with flat minima and grokking.\\n\\nDuring the rebuttal the reviewers have pointed out a few over-claims which have been addressed by the authors. There are other concerns regarding the scale of the experiments that have not been properly addressed. I also agree with Reviewer eYcM that more extensive experiments on larger-scale models/datasets could be helpful to at least showcase the generality of the claims given that the main focus of this paper is on the empirical phenomenon rather than theoretical contributions. \\n\\nOverall, the empirical connection between sensitivity to flat minima and grokking is interesting and might lead to follow-up work. I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"3 out of 4 reviewers are enthusiastic about the paper, including one of the reviewers who initially rated negatively of the paper. The last reviewer is an expert reviewer whose comments help to better position the contributions of this work. Overall I feel it's an interesting phenomenon worth sharing with the community and can potentially lead to some follow-up work, hence I recommend acceptance, but as a poster rather than spotlight or oral given the limitation on experiments.\"}", "{\"comment\": \"We sincerely thank the reviewers for their time and effort in reviewing our work. 
We are encouraged that the reviewers find the topic of understanding transformers important (kdeM, eYcM), the use of sensitivity novel/original (A7Md, kdeM, eYcM), the finding that transformers learn low sensitivity functions interesting (pCqz), the writing clear and easy to follow (kdeM, eYcM). The reviewers appreciate our experiment design (eYcM) as well as the comprehensiveness and consistency of the results (pCqz, A7Md, kdeM), which demonstrate that transformers have lower sensitivity compared to other architectures across both synthetic and realistic vision and language datasets. They also find the theoretical results interesting (kdeM) and the connections between sensitivity and other phenomena like robustness and grokking new (pCqz, A7Md) and important (kdeM).\\n\\nWe have addressed the comments in the responses to each reviewer and will incorporate all feedback into the paper.\"}", "{\"summary\": \"This work builds on the theoretical notion of boolean sensitivity, extending it to an empirically measurable quantity and studying it for the case of transformers. It finds that transformers have lower input sensitivity on the training data, compared to other architectures, and that this is correlated with other phenomena such as test-time robustness, sharpness of minima, and grokking.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The study in the paper is quite intriguing. A few things I liked:\", \"Provides a new lens on what is different about transformers\", \"Demonstrates phenomena consistently across many datasets\", \"Provides a new lens on grokking not captured by the weight norm\"], \"weaknesses\": [\"There were a few key places where I felt the paper overclaimed or made dubious claims, which are enough for me to not favor acceptance. In particular:\", \"Lower sensitivity leads to robustness: this is basically a restatement of the claim that Gaussian data augmentation improves robustness. 
This is a very well-known result; the authors do say that it is in line with other results in the literature, but I feel they are understating the extent to which this is well-trodden ground (for instance, Hendrycks, one of the authors of CIFAR-10-C, has published an entire line of work on data augmentation methods for improving robustness; Gaussian noise is the simplest of these and many others work far better).\", \"Perhaps more importantly, this sentence does not seem merited: \\\"Together, these results indicate that the inductive bias of transformers to learn functions of lower sensitivity *explains* the improved robustness (to common corruptions) compared to CNNs.\\\" I am not sure what \\\"explains\\\" means, but there are many other interventions that improve robustness (such as the data augmentation methods mentioned above), and some of those might have better explanatory power.\", \"It is not entirely clear whether input sensitivity is a *different* phenomenon than test-time robustness to perturbations. The main difference is it is computed on the training set instead of the test set --- but are there cases where these come apart, or is test-time and train-time input sensitivity always highly correlated?\", \"I think the results could be interesting either way -- but if they are the same, then this is interesting mainly because it is a proxy for robustness that can be computed at training time; if they are different, then understanding the differences would be interesting.\"], \"questions\": \"Have you compared input sensitivity and perturbation robustness at test time? When, if ever, do they behave differently?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for updating your score in support of our paper. 
We appreciate your time and effort and will incorporate your suggestions into the final version.\"}", "{\"comment\": \"**Questions:**\\n\\n**Q1:** Please see the response to weakness 2a.\\n\\n**Q2: How does the variance $\\\\sigma$ impact the results?**\\n\\nThank you for the question. We evaluate the sensitivity values of the five models compared on the CIFAR10 dataset at the end of training using different values of $\\\\sigma$. We added the results in Appendix A.3 and we see that even though the sensitivity values are different for different $\\\\sigma$ (as expected), the conclusion that transformers learn lower sensitivity functions than CNNs is robust to the value of $\\\\sigma$. Appendix A.4 also has results for the QQP dataset with a different value of $\\\\sigma$ than considered in the paper.\\n\\n**Q3: Using fixed noise level for different tokens.**\\n\\nWe believe that token norms are comparable and thus, it makes sense to use a fixed noise level. Specifically, noise is added at patch level for images, and as mentioned in line 377 in the paper, for language tasks, noise is added after layer normalization.\\n\\n**Q4: Attention architecture considered in Section 3.1**\\n\\nThe model considered in Section 3.1 is a standard attention layer with key, query, and value weights $W_K, W_Q, W_V$, composed with a linear decoder $U$. Please also see the response to weakness 1a.\\n\\n**Q5: Does test accuracy correlate with sensitivity?**\\n\\nFig 13 in the Appendix shows the test accuracies for the models considered in Fig. 4 on the CIFAR-10 dataset. Although the ViTs have a slightly higher test accuracy, it is comparable across the five models. \\n\\n**Q6: How does model scale affect the difference between sensitivity values?**\\n\\nThis is an interesting question. However, as mentioned in the response to weakness 1b, comparing larger models is challenging because we also have to control the pretraining data and strategies for a fair comparison. 
\\n\\n**Q7: \\u201cSection 5 only considers the RoBERTa model. Do the claims hold for more recent language models?\\u201d**\\n\\nAs mentioned in line 401 in the paper, we include the results with GPT-2 in the Appendix which lead to the same conclusions. Please also see the response to weakness 1b.\"}", "{\"comment\": \"We sincerely thank the reviewer for their positive feedback and suggestions that helped improve our work. We greatly appreciate your support and encouragement.\"}", "{\"summary\": \"This work studies the sentitivity of functions defined by different deep learning architectures, comparing the specific case of Transformers with CNNs and Mixers. The work stems from previous work tha has studied sensitivity with Boolean inputs, and derives a formulation for token-based models. The authors make a connection between sensitivity and robustness, show how ViTs are less sensitive than other architectures and also show how sensitivity can be used for grokking analysis.\\nExperiments on synthetic data are provided, as well as experiments using ViT on small datasets (CIFAR, SVHN, ImageNet) and LLMs on 2 datasets (MRPC and QQP).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Originality:**\\n\\nFocusing on sensitivity starting from the Boolean formulation is original. I also found the experiment on a synthetic vocabulary (3.1) original.\\n\\n**Clarity:**\\n\\nThe paper is well written, with clear language. 
The mathematical notation and formulation are also easy to read.\n\n**Significance:**\n\nThe study of sensitivity in current models is important for interpretability as well as to design better training strategies.\", \"weaknesses\": \"**Originality:**\n\nWhile the study of sensitivity has its originality, many previous works have studied sensitivity in many ways, for example by understanding the effects of image augmentations (see contrastive learning literature).\n\n**Quality:**\n\nThe experiments provided are either synthetic or use small models/datasets. This makes the claims in the paper weaker in my opinion. For example:\n* Results in Section 3 use synthetic data and a single attention layer. I would argue that, while still interesting, these experiments might not transfer to full models with several layers and multiple attention heads.\n * Related to this experiment, other research has been carried out analyzing spurious correlations. For example, the work by Robert Geirhos (among others) has already shown that CNNs tend to learn from the easiest cues available. In the experiments in Section 3.1, these \\\"easy\\\" cues would be the sparse tokens. Once they become uninformative, the next available (but harder) cue is the frequent tokens. \n\n> Geirhos, Robert, et al. \\\"Shortcut learning in deep neural networks.\\\" Nature Machine Intelligence 2.11 (2020): 665-673.\n\n> Geirhos, Robert, et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" arXiv preprint arXiv:1811.12231 (2018).\n\n* Results in Section 4 use small datasets (CIFAR, SVHN) and arguably a medium size dataset nowadays (ImageNet). The models used (Vit-simple/small) are far from real scenarios nowadays, and the compared architectures are also small (a 3-layer CNN, for example).\n\n* Results in Section 5 use a Roberta model (2019) which does not have the same properties as current LLMs. 
Also, this model is trained from scratch on small tasks, which also does not transfer to current abilities of LLMs.\n\nIn several cases, bold conclusions are extracted from a single model / single dataset experiment, with which I cannot agree. For example, the claim in L357 *_\\\"Thus, transformers learn lower sensitivity functions compared to MLPs, ConvMixers, and CNNs\\\"_* is validated with a 3-layer CNN on a small dataset like SVHN.\n\n**Clarity:**\n\n* It is not clear how the noising strategy is performed. The text mentions that _tokens_ are polluted with noise; however, Fig 1 shows the noise applied to the pixel patch and says *_\\\" the original image and the corrupted image are fed into the same neural network\\\"_* (which implies that noise is applied at pixel level). The authors should clarify this important aspect.\n\n* It is also not clear how noising is applied to CNNs (which are not patch/token based).\n\n* Proposition 2.1 is harder to parse than the rest of the text, and it is hard to understand why it is important for the paper.\n\n**Significance:**\n\nWhile the objective of the paper is significant, the results provided and the size of the experiments largely diminish the impact of this work.\", \"questions\": [\"Following up on my previous comment, the authors should clarify if the noising procedure is applied on patches (pixels) or token representations. Fig. 1 contradicts the text.\", \"Also, how is noising applied on CNNs?\", \"How is $\\sigma$ important, and why were different $\\sigma$ chosen for the experiments in Section 4. I personally find it a drawback that one needs to find a right $\\sigma$, and that using different ones the conclusions might change. Also, bold claims are provided with a single (different) sigma per dataset, which raises some questions.\", \"ViT/LLMs might produce different token scales, but $\\sigma$ is kept fixed. This can strongly impact some tokens and leave others almost noise-less. 
I find this also a negative point of this algorithm, since some \\\"topics\\\" might by-pass the noising.\", \"How is a single attention layer representative of a large Transformer in Section 3.1? I would ask the authors to elaborate on this.\", \"Additionally, why have a linear layer $U$ after another linear layer $W_v$, since the composition of both is already a linear layer?\", \"In Fig. 4, only the training accuracy is provided. What about the test accuracies? It is known that models achieve perfect train accuracy, but the test accuracy might be very different. Does the test accuracy correlate with the sensitivity (measured on train data as you already do)?\", \"About the claim in L347 *_\\\"This shows that the observations on small-scale models studied in this section transfer to large-scale pretrained models.\\\"_*. By increasing scale, the sensitivity of a conv model has gone down to 0.0342, which is much lower than 0.0829 for ResNet-18. Also, ViT went up from 0.0014 to 0.0191. It would be fair to conclude that scaling up brings sensitivities closer, which would mean that small-scale does not transfer to large-scale. Also, one could go much larger in scale (larger ViT, larger datasets) and see if the trend is still maintained or sensitivities are even closer.\", \"Claims in Section 5 are obtained with one Language model (Roberta) and one LSTM on 2 small datasets. I cannot agree with the claims being generic for *Language Tasks* with this setup. Moreover, knowing that current LLMs have much different properties than LMs used in 2019 (Roberta).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the positive feedback and the helpful comments.\n\n\n**W1, Q1: How do sensitivity and robustness relate to the tradeoff between robustness and accuracy observed in different contexts?**\n\nThank you for the question. 
We note that these tradeoffs occur when measuring robustness to adversarial perturbations or out-of-distribution (OOD) data. In this paper, we focus on benign shifts to measure robustness and hence do not observe this type of tradeoff.\n\n\n**W2-4**: Thank you for the suggestions to improve the presentation of the paper. We have made some edits to the introduction as suggested. We will also incorporate the other suggestions in the final version.\n\n\n**Q2: Relation between generalization in representation learning and generalization in sensitivity**\n\nThis is an interesting question. We believe that under benign shifts, such as on new samples from the same distribution as the train set, or under small random corruptions, most properties should generalize. At the very least, the conclusions drawn based on the metrics should generalize, even if the values change. Exploring the generalization of sensitivity systematically can be an interesting direction for future work. \n\n\n**Q3: Comparing LSTM and RoBERTa with the same number of parameters.**\n\nAs mentioned in the paper, we compare models that have the same accuracy to ensure the comparison of sensitivity values is fair. Since both the models attain very similar accuracy, they can potentially learn similar functions that could attain similar sensitivity values. However, we observe that they learn functions that have similar accuracy but differ significantly in terms of sensitivity.\"}", "{\"summary\": \"The paper explores the inductive biases of transformers, particularly their tendency to learn low-sensitivity functions. It introduces sensitivity as a measure of how model predictions respond to token-wise perturbations in the input. By comparing transformers to other architectures like MLPs, CNNs, ConvMixers, and LSTMs across both vision and language tasks, the paper shows that transformers consistently exhibit lower sensitivity. 
This low-sensitivity bias is linked to improved robustness, flatter minima in the loss landscape, and hence better generalization. Additionally, the authors propose that sensitivity could act as a progress measure in training, and is linked to grokking.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"**Key strength:** In addition to the general importance of developing rigorous understanding of how transformers work and why they show such remarkable properties, this paper proposes a novel perspective by looking into sensitivity. They rigorously define sensitivity and provide strong arguments on how it links to other important properties, such as robustness and generalization. They also show that it can track progress when grokking happens, which I think is an important finding and could potentially enable a series of future studies on grokking.\", \"**Other strengths:** Here is a list of other points that I commend the authors for:\", \"The introduction is quite well written and motivates the main question quite well (though it could be improved; see weaknesses). Similarly, the contributions are well explained at the end of the introduction.\", \"The presentation of the paper is strong, and maintains a good balance between accessibility and rigor.\", \"Propositions 2.1 and 2.2 are really interesting results on the spectral bias and sensitivity of transformers.\", \"The authors explain the implications of their theory quite well.\", \"The experimental design is thorough and well-tailored to validating the theory.\", \"While I consider this a theoretical paper, the experiments are quite strong and cover various aspects of the paper\u2019s main questions.\"], \"weaknesses\": \"I do not see any major weakness. But there could be some improvements. See my suggestions for improvement, below.\n1. 
While the paper clearly explains that lower sensitivity is linked to higher robustness, the trade-off/connection with expressivity and performance is not discussed. There is a well-established trade-off in various contexts (see, e.g., [1-2]), and it would further strengthen the paper to discuss this.\n\n2. Though I think the introduction is quite well-written, I think it under-emphasizes the findings of the paper on the role of sensitivity analysis. The authors conduct a rigorous analysis of the transformers' sensitivity and use that to clarify some of the important properties of transformers as I mentioned for the strengths, but while doing so, they also show, quite rigorously with strong theory and experiments, how sensitivity analysis could be used to understand generalization, grokking, etc. Near the end of the paper this realization caught my attention, and the authors actually do point this out more clearly in the Conclusion, but I think this can be better emphasized in the Introduction.\n\n3. I suggest the authors bring the Limitation section from the appendix to the main paper. The limitations are not discussed in the main paper, while it is always important to discuss them.\n\n4. This is a rather minor point and it might be a matter of taste: Do sections 5 and 6 really need to be separate sections? It seems like the findings are generally similar, and they could be merged in one section of empirical analysis of vision and language models.\n\n\n**References**\n\n[1] Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019, May). Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning (pp. 7472-7482). PMLR.\n\n[2] Raghunathan, A., Xie, S. M., Yang, F., Duchi, J., & Liang, P. (2020). Understanding and mitigating the tradeoff between robustness and accuracy. arXiv preprint arXiv:2002.10716.\", \"questions\": \"1. 
Related to weakness 1, how do you think the sensitivity and robustness relate to expressivity and performance in transformers?\n\n2. Lines 310-311 mention generalization capabilities of different models as a reason to investigate sensitivity during training. This made me curious: how do you think generalizability in representation learning or classification relates to generalizability in sensitivity (I think one direction of it is clear, but the other direction is not)?\n\n3. In line 394, you mention that you use the same number of layers for LSTM and RoBERTa for a fair comparison. How about the model size in terms of number of parameters? How many parameters in each model? And how do you think changing this could impact your results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the helpful comments and feedback.\n\n\n**W1-2: Regarding the claim that low sensitivity explains the better robustness of transformers.** \n\nFollowing the reviewer\u2019s suggestion, we have rephrased the aforementioned statement in the paper as follows: \u201cAs encouraging lower sensitivity improves robustness, the inductive bias of transformers to learn functions of lower sensitivity could explain their better robustness (to common corruptions) compared to CNNs.\u201d \n\nAs the reviewer mentioned, prior works have shown that transformers are more robust than CNNs. We note that we discuss related work on robustness and data augmentation in lines 1467-1479 and lines 1489-1499, respectively, in the Appendix. However, discussing the difference in the robustness of transformers and CNNs in more detail helps us elucidate the connection between lower sensitivity and better robustness. 
\\n\\nWe summarize our results from Section 6.1 below and hope our response and the revised statement address the reviewer\\u2019s concern about the section. \\n\\nIn Sections 4 and 5, we showed that transformers have lower sensitivity compared to other architectures, and in Section 6.1, we investigate the role of lower sensitivity in the improved robustness of transformers. Using the CIFAR-10-C dataset, we (a) observe that transformers have better robustness compared to CNNs, and (b) show that encouraging lower sensitivity while training the transformer further improves the robustness. Here, we emphasize three points.\\n\\nFirst, transformers exhibit a bias towards learning low-sensitivity functions even when trained without explicit regularization, as shown in the experiments in Sections 4 and 5.\\n\\nSecond, our goal while training with data augmentation (with Gaussian noise added to the images randomly) and sensitivity regularization (with patch-wise Gaussian noise added to the images) is to see if we can show that reducing sensitivity leads to improved robustness. These methods seem like the simplest ways to encourage low sensitivity. We welcome any suggestions the reviewer may have about other ways we can encourage lower sensitivity, and will try our best to test those. \\n\\nThird, while one may expect that encouraging lower sensitivity while training would improve robustness to noise corruption, we observe improved robustness to other corruptions as well. For instance, corruptions from the blur, weather and digital transform categories are significantly different from the noise corruptions as they cannot be implemented by making small pixel-wise changes to the image. 
The improved robustness across these perturbations suggests that it could be a consequence of the lower sensitivity.\\n\\nIn other words, while the definition of sensitivity is slightly similar to the corruptions from the noise category, it is quite different from the other corruptions and still correlates well with the improved performance on those. \\n\\n\\n**W3, Q1: Comparing (train) sensitivity to perturbation robustness at test time.**\\n\\nAs suggested by the reviewer, there could be connections between sensitivity and robustness to random perturbations; however, sensitivity as a metric not only serves as a measure of inductive bias that distinguishes transformers from other architectures but also has important implications, as a measure of robustness and flatness of the minimum and as a progress measure for grokking. \\n\\nFor instance, in App. A.2, we evaluate sensitivity to Gaussian noise added across the input and observe that while transformers have lower sensitivity (or better robustness), this metric does not distinguish transformers from other architectures as clearly as the sensitivity to token-wise Gaussian perturbations. Similarly, some of the noise corruptions considered in the experiments in Section 6.1 also indicate that lower sensitivity is correlated with better robustness. \\n\\nThat being said, since we consider sensitivity a notion of inductive bias, it makes sense to measure it on the train set, analogous to other metrics of inductive bias such as the maximum $\\\\ell_2$-margin for linear predictors on separable data. This allows leveraging a property of the train data to predict things like generalization on the test data.\"}
Such functions can be described using polynomials with the sensitivity connected to the degree of the polynomial. The paper proves that a linear attention transformer is biased (in the eigenvalue of the NTK sense) toward low-sensitivity functions characterized by the degree. Then they go on to generalize the notion of sensitivity for neighborhoods of general (non-boolean) inputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Useful formalization of sensitivity\", \"Interesting findings about low sensitivity, robustness, and sensitivity to different parts of the input (like last token in a sequence)\", \"variety of tasks for better understanding of where architecture properties come from\", \"connecting between grokking and sensitivity provides a new lens into understanding and improving DNN training.\"], \"weaknesses\": [\"On line 141, \\\"where the eigenvalues are non-decreasing with the degree of the multi-linear monomials\\\" would be easier if it said \\\"eigenvalues do not decrease as the degree increases.\\\"\"], \"questions\": [\"What's the difference between sensitivity and adversarial/robustness that looks at neighborhoods?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you addressing my concerns and answering my questions. I maintain my score and I would like to take the chance to again commend the authors for the merits of their research and writing this strong paper. This is strong paper, my assessment is that the quality of this paper is substantially higher than a typical ICLR paper (based on those I have read from previous years, of course), and **I strongly recommend acceptance of this paper for publication at ICLR**.\"}" ] }
4ihkxIeTFH
FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information
[ "Dongseong Hwang" ]
This paper establishes a mathematical foundation for the Adam optimizer, elucidating its connection to natural gradient descent through Riemannian and information geometry. We rigorously analyze the diagonal empirical Fisher information matrix (FIM) in Adam, clarifying all detailed approximations and advocating for the use of log probability functions as loss, which should be based on discrete distributions, due to the limitations of empirical FIM. Our analysis uncovers flaws in the original Adam algorithm, leading to proposed corrections such as enhanced momentum calculations, adjusted bias corrections, and gradient clipping. We refine the weight decay term based on our theoretical framework. Our modified algorithm, Fisher Adam (FAdam), demonstrates superior performance across diverse domains including LLM, ASR, and VQ-VAE, achieving SoTA results in ASR.
[ "Optimizer", "Adam", "Natural gradient descent", "Second order optimization", "Information geometry", "Riemannian geometry", "Differential geometry", "Tensor calculus", "Deep learning", "Fisher Information", "Hessian", "Curvature" ]
https://openreview.net/pdf?id=4ihkxIeTFH
https://openreview.net/forum?id=4ihkxIeTFH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "cmPEzJg5Q7", "bszfdJbm9M", "QjILJDg6So", "IWqMc78IrH", "BKRK4yOAUu", "7wmsifls8M" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730901219745, 1731210262415, 1729537701426, 1730651127136, 1732731211347, 1730754573247 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7668/Reviewer_ZZV1" ], [ "ICLR.cc/2025/Conference/Submission7668/Reviewer_2Fyw" ], [ "ICLR.cc/2025/Conference/Submission7668/Reviewer_FbYu" ], [ "ICLR.cc/2025/Conference/Submission7668/Reviewer_PpE9" ], [ "ICLR.cc/2025/Conference/Submission7668/Authors" ], [ "ICLR.cc/2025/Conference/Submission7668/Reviewer_yRgP" ] ], "structured_content_str": [ "{\"summary\": \"The paper reiterates and expands the motivation of Adam as approximate natural gradient descent. It derives multiple modifications to the Adam algorithm based on that interpretation. The resulting method (FAdam) is evaluated experimentally.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The argument relating decoupled weight decay to the information-geometric interpretation is interesting. It clarifies that the gradients used to compute $v$ (the diagonal empirical FIM) must be gradients of the log likelihood of a probabilistic model to match the definition of the FIM and therefore must not contain regularizers or auxiliary losses.\", \"Averaging the preconditioned gradients (versus preconditioning the averaged gradient) is an interesting variant.\"], \"weaknesses\": [\"The paper presents as an original finding that it \\\"establishes a mathematical foundation for the Adam optimizer\\\" in terms of NGD with the empirical Fisher information matrix. This is misleading. This motivation has been given in the original Adam paper and has since been discussed and critiqued in various papers, including but not limited to Kunstner et al. (2019). 
This should be made transparent in the discussion of related work.\", \"The paper states that \\\"for using natural gradient optimizers [...] the loss function must be in the form of the log-likelihood\\\". This is not a factual statement and should be adjusted. Preconditioning with the Fisher information matrix adapts to the geometry induced by a certain probabilistic model. The negative log likelihood under said model may be a \\\"natural\\\" objective function to optimize, but NGD can meaningfully be applied to any other objective. In fact, in Section 3.4.3, the authors advocate for preconditioning an additional loss term with the FIM.\", \"The argument in Section 3.4.1 regarding the use of the square-root on the preconditioner is not stringent. If I am understanding correctly, the argument is that $\\\\Vert \\\\nabla J/\\\\sqrt{f} \\\\Vert^2_2 \\\\approx \\\\Vert F^{-1} \\\\nabla J\\\\Vert_F^2$, i.e., preconditioning with the square-root makes the Euclidean norm of the resulting update equal the \\\"Fisher norm\\\" of the natural gradient. However, there is no discernible argument why it would be desirable to match these two quantities and, if so, why one would want to achieve this by changing the preconditioner rather than, say, scaling the update with a scalar factor? (Minor: The notation should also be improved in Eq. (25) - (27), since $\\\\Vert\\\\cdot\\\\Vert$ is used to refer to both the Euclidean norm and the \\\"Fisher norm\\\".)\", \"The paper briefly cites Kunstner et al. 
(2019), which is an explicit critique of the interpretation of Adam as NGD, but does not really engage with the arguments in that paper.\", \"Overall, the paper combines various components that are somewhat independent of each other:\", \"a) introduce gradient clipping,\", \"b) apply momentum after preconditioning,\", \"c) apply preconditioning to the weight decay gradient.\", \"It would be highly desirable to perform ablation studies to understand which of these changes actually matter and how they interact.\", \"The quality of the empirical evaluation is a bit lacking. No error bars are given. The hyperparameter tuning protocol is somewhat unclear, e.g., FAdam uses a different epsilon value and it is not stated how this value was obtained.\"], \"questions\": [\"What is the exact argument for the \\\"invariant natural gradient\\\"?\", \"Kunstner et al. (2019) explicitly critique the interpretation of Adam as approximate NGD. What is your response to their arguments? (E.g., degeneracy of the empirical FIM for overparametrized models, sensitivity to model misspecification, no relationship between empirical and true FIM far from an optimum.)\", \"How were the $\\\\epsilon$ values in the experiment chosen?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
The presentation of this paper is good in general.\", \"weaknesses\": \"1. The theoretical analysis of FAdam is weak. It directly follows the paper (Defossez et al. 2020) and requires strong assumptions (e.g., $\\\\beta_1$=0, bounded gradient). So it does not analyze the algorithm's momentum and is worse than the state-of-the-art analysis of Adam in the literature.\\n\\n2. I do not find any rigorous presentation of Adam's flaws in the paper as claimed in the abstract by the authors. For example, the paper does not have any clear negative results of the vanilla Adam experimentally or theoretically. Therefore, the motivation of designing a new variant of Adam such as FAdam is unclear to me.\\n\\n3. The description of the experiment is unclear. Lots of details are missing, such as training/test learning curve comparison, learning rate of the optimizer, batch size, and memory costs. Also, the experiment is only run once, and the algorithm's robustness is unclear.\", \"questions\": \"See weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors provide an explanation of the second-order moment $v_t$ of Adam from the perspective of the diagonal Fisher information, and propose a new optimizer, FAdam, by utilizing this perspective.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper provides a comprehensive discussion of the previous works, and the empirical results seem supportive.\", \"weaknesses\": \"At least from my perspective, I can not grasp the main contribution of this paper. I feel it is more like a technical report or a review instead of a paper. I will list my concerns as follows:\\n\\n1. The main conclusion of this paper is somewhat unclear, and the writing is difficult to follow. 
From my understanding, the authors aim to claim that Adam is a variant of natural gradient descent and introduce a new optimizer, FAdam, as I outlined in the summary. However, they spend over half of the paper discussing basic statistical properties and formulas related to Fisher information, along with extensive reviews of previous works, without presenting their own results or conclusions. In contrast, the descriptions of the algorithms and the theoretical convergence results are glossed over. While I acknowledge that some discussion of prior works is necessary, I believe it should be integrated with the proposed methods and conclusions of this paper. In summary, the lengthy review of existing literature and preliminary knowledge renders the current manuscript confusing and unappealing.\\n\\n\\n2. The technical contribution of this paper is relatively insufficient. As a work proposing a new optimizer, the authors fail to provide a rigorous theoretical guarantee of convergence. The current version's convergence analysis disregards the effects of momentum, and even this incomplete result is derived directly from another paper. Furthermore, the statement that _\\u201cSince FAdam\\u2019s momentum is analogous to Polyak momentum, FAdam\\u2019s momentum also tightens the convergence bound. Therefore, the convergence bound for the natural gradient without momentum is looser than the convergence bound for FAdam,\\u201d_ is presented without adequate justification and is not convincing. It is unreasonable to assert that the convergence bound of one optimizer is looser than that of another without rigorous derivation.\", \"questions\": \"I suggest the authors rethink the major contribution of this paper. As a paper proposing a new optimizer, it might be better to first introduce the new algorithm and present the pros of this algorithm (they could be empirical results or theoretical guarantees). 
Although I understand that some basic illustrations of the preliminary knowledge or the motivation of some terms are necessary, at least for this paper, I believe the discussion in the current manuscript should be refined. For example, I do not get any interesting insights from the discussion about the connection between the log-likelihood of a Gaussian distribution and the $\\ell_2$ loss, as it is basic knowledge of statistics and seems not deeply correlated with FAdam.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors proposed a connection between the Adam optimizer and natural gradient optimization, treating the moving average of squared gradients in Adam as an estimate of the diagonal elements of the Fisher information matrix. They hypothesised that Adam's advantage over other methods might be due to its use of natural gradients; the advantage is particularly noticeable in tasks with discrete distributions, since they allow for a tighter approximation of the Fisher matrix.\\n\\nThe authors also offered a justification for the necessity of normalization by the square root of squared gradients to ensure basis invariance when averaging gradients in Adam. Additionally, they analyzed how momentum, weight decay, and clipping should function in the context of natural gradients and proposed new variants of Adam and Adafactor \\u2014 FAdam and FAdafactor. The proposed FAdam method demonstrates superior performance for models like LLMs and VQ-VAEs, as well as in ASR tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors have identified an important and interesting connection between the success of the Adam optimization method and optimization by the natural gradient method.\\n2. The authors proposed an explanation for why Adam's advantages primarily emerge in problems with discrete distributions.\\n3. 
The authors established principles for using momentum, weight decay, and clipping in optimization with invariant gradients. Based on this analysis, they proposed a new method \\u2014 FAdam, which demonstrates improved performance compared to traditional Adam.\", \"weaknesses\": \"1. The authors did not provide an analysis to assess the accuracy of the approximations and simplifications used in this method.\\n \\na) Why is the transition from sampling from $p(x|\\\\theta)$ to sampling from $p_{data}$ valid? In the case of an undertrained model, the distribution $p(x|\\\\theta)$ can differ significantly from the marginalized $p_{data}$ distribution. \\n \\nb) How accurate is the transition from Eq. (20) to Eq. (21) and why it is not critical to the method's effectiveness?\\n \\nc) How accurate is the FIM approximation throughout the hundreds of optimization steps for EMA? \\n\\nIf the authors could provide ablation experiments comparing Adam, FAdam, a true natural gradient method, and other methods incorporating intermediate transitions on simple tasks (e.g., CIFAR-10), it would significantly increase confidence in the results.\\n\\n2. In the theoretical justification for preferring discrete distributions: due to uniform sampling from $p_{data}$, discrete distributions can also provide a poor approximation of the FIM. This is because the concentration of the distribution may lie in false logits, which is common in yet-not-fully-trained networks. The score might not be large enough to yield a good approximation.\\n\\n3. While Amari et al. (2019) prove that unit-wise block diagonal FIM has off-diagonal blocks smaller by $\\\\frac{1}{\\\\sqrt{n}}$, the authors' interpretation appears to extend beyond the original result. Their derived claim about individual diagonal weights dominating off-diagonal weights by $\\\\frac{1}{\\\\sqrt{n}}$ (lines 186-188) may need additional justification, as it's not directly supported by Amari's work.\\n\\n4. 
The absence of confidence intervals in the experimental results prevents us from being fully certain of FAdam's superiority, due to the marginal score improvements. Additionally, providing further experiments on a broader range of domains would strengthen the evidence of the proposed method's improved performance.\\n\\n5. Minor typos: in Eq. (25), it should be the square of the norm; in Eq. (19), \\\"approx.\\\" should replace the equality sign.\\n\\nTo summarize, I believe this paper relies too heavily on unjustified approximations and is not yet ready for the conference. However, if the authors provide additional experimental and theoretical validation, I might increase my score.\", \"questions\": \"Please see the weaknesses for questions and improvements.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work takes a statistical viewpoint of the Adam optimization algorithm and attempts to both explain its performance and add improvements through the lens of the natural gradient algorithm. The authors argue that Adam is effectively preconditioning with the diagonal of the Fisher Information Matrix, which leads to the algorithm\\u2019s superior performance.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The approach of analyzing Adam from a statistical viewpoint is interesting, and while not being new (this interpretation was mentioned in the original Adam paper) it could deserve a second look. The authors additionally show some empirical improvements in a few settings.\", \"weaknesses\": \"While section 2.1 is likely relevant for doing a detailed analysis of Adam in the proposed framework, as far as I can tell that analysis does not actually take place in the paper or appendix. 
Given this, this section feels quite out of place to me; I\\u2019m unsure of its value for the main message of the paper. Most readers in the optimization or statistics community are familiar with Fisher Information Matrix based methods and their connections to second-order Newton-style algorithms, so I\\u2019m unsure of the value of introducing them using ideas from differential geometry on manifolds.\\n\\nThroughout this paper it is repeatedly claimed that Adam is preconditioning with the diagonal of the Fisher Information Matrix, but the approximation used in Adam is not the same thing in general. Adam has been connected to second-order-like or natural-gradient-like algorithms, but it is known in general that the gradient squared is not an approximation to the diagonal of the Fisher Information Matrix. While it is in expectation, in the finite data regime there are clear counterexamples to this, such as in [1], which the author cites.\\n\\nOn line 250 the authors claim that Adam excels in classification tasks such as next token prediction, but this seems somewhat contradictory to the previous line, where it is claimed that CNNs are often better when trained with SGD. I agree that in many vision tasks SGD matches Adam in performance, but what is left out of the text is that that is true in most classification problems in vision, which is again a discrete output space. I\\u2019m additionally not sure of the strength of the claim that Adam is less strong in the generative setting. I\\u2019m aware of works such as [2] that claim the opposite (which the author cites), attempting to answer the question \\u201cWhat factors explain that Adam produces better quality solutions than SGDA when training GANs\\u201d, and propose modifications to SGD to help it compete with Adam.\\n\\nOverall, several assertions are made that Adam fails on continuous regression targets, but I feel like there is not sufficient citation or experimentation to back that up. 
Adam excelling in discrete output spaces (which again is not always true; training ResNets with SGD is still very common) is not the same thing as Adam failing on continuous tasks, and needs to be justified if it is being claimed. Examples counter to this idea exist in the literature, such as the quadratic function minimized in figure 6 of [3], where Adam handily outperforms gradient descent.\\n\\nThe notation of some of the equations is a bit unclear. For example, in equation 15, while I understand the division is coordinate-wise, this should be explicit in the notation; otherwise a less familiar reader may think we\\u2019re trying to divide a vector by another vector, which is ill-defined. \\n\\nCosmetically, the citation style is very non-standard and makes reading difficult; I would suggest the authors use a more standard method of in-text citation.\\n\\nMinor, but the Adam algorithm written in B.5 is in fact AdamW and has clipping added, which was not included in the original algorithm.\\n\\nThe central weakness of this paper in my opinion is that it misunderstands how approximate the approximations in Adam are. The idea of the Adam update being connected to the diagonal of the Fisher information matrix is not new; it was mentioned in Kingma and Ba (2014). The optimization community has tried very hard to understand why Adam works (another weakness of this paper is that there is no related work regarding the vast amount of research into understanding Adam) and this approach has not appeared to yield progress. 
The authors acknowledge the significance of these approximations in appendix B.3, but given the amount of work showing that these approximations are often very poor, I don\\u2019t think the community can comfortably understand Adam as a natural gradient algorithm.\\n\\n\\n[1]\\nFrederik Kunstner, Lukas Balles, Philipp Hennig\\n\\nLimitations of the Empirical Fisher Approximation for Natural Gradient Descent\\n\\nhttps://arxiv.org/pdf/2402.19449\\n\\n[2]\\nAdam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance\", \"questions\": [\"Can the authors give a citation regarding using an EMA for Fisher info on line 228? I\\u2019m not aware that\\u2019s been used prior to Adam.\", \"How does the proposed framework justify clipping? In B.3 clipping and epsilon are mentioned through related work, but this step that has been added to the algorithm does not appear to be justified by the theoretical framework.\", \"What norms are being used in equations (25)-(27)? I\\u2019m assuming the first norm is the one induced by the Fisher Information Matrix, but then what is the other one? Euclidean?\", \"Has the author tried to quantify how accurate the approximations (F)Adam uses are in a simple setting? This can help figure out if those approximations are in fact reasonable, which needs to be the case in order to claim it\\u2019s really natural gradient in disguise.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
4iFSBgxvIO
Cached Multi-Lora Composition for Multi-Concept Image Generation
[ "Xiandong Zou", "Mingzhu Shen", "Christos-Savvas Bouganis", "Yiren Zhao" ]
Low-Rank Adaptation (LoRA) has emerged as a widely adopted technique in text-to-image models, enabling precise rendering of multiple distinct elements, such as characters and styles, in multi-concept image generation. However, current approaches face significant challenges when composing these LoRAs for multi-concept image generation, particularly as the number of LoRAs increases, resulting in diminished generated image quality. In this paper, we initially investigate the role of LoRAs in the denoising process through the lens of the Fourier frequency domain. Based on the hypothesis that applying multiple LoRAs could lead to "semantic conflicts", we have conducted empirical experiments and find that certain LoRAs amplify high-frequency features such as edges and textures, whereas others mainly focus on low-frequency elements, including the overall structure and smooth color gradients. Building on these insights, we devise a frequency domain based sequencing strategy to determine the optimal order in which LoRAs should be integrated during inference. This strategy offers a methodical and generalizable solution compared to the naive integration commonly found in existing LoRA fusion techniques. To fully leverage our proposed LoRA order sequence determination method in multi-LoRA composition tasks, we introduce a novel, training-free framework, Cached Multi-LoRA (CMLoRA), designed to efficiently integrate multiple LoRAs while maintaining cohesive image generation. With its flexible backbone for multi-LoRA fusion and a non-uniform caching strategy tailored to individual LoRAs, CMLoRA has the potential to reduce semantic conflicts in LoRA composition and improve computational efficiency. Our experimental evaluations demonstrate that CMLoRA outperforms state-of-the-art training-free LoRA fusion methods by a significant margin -- it achieves an average improvement of $2.19$% in CLIPScore, and $11.25$% in MLLM win rate compared to LoraHub, LoRA Composite, and LoRA Switch.
[ "Low-Rank Adaptation (LoRA)", "Multi-LoRA composition", "Text-to-image models", "Computational efficiency" ]
Accept (Poster)
https://openreview.net/pdf?id=4iFSBgxvIO
https://openreview.net/forum?id=4iFSBgxvIO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uHAMujouTX", "taXDtacsFF", "rgy0V0xB3p", "rJhJGUCOUl", "rBnL6W8NcF", "nLUONvmr7U", "fuOfGtDC4x", "fchoGohqDT", "ddwsk3DhAM", "dNtbuk2JCc", "bbGwpb6TvG", "XPukYTw3sh", "WAAjg2myx7", "L7zzhkGcfr", "L6ArtAGBOg", "J0q6GbsssW", "IENckjFgea", "GFpMItBaK7", "BK3OPyScYx", "9U7ZZE6Nab", "0ujXZkrdDZ" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730707310538, 1732786448370, 1731986345820, 1733145551712, 1732002705299, 1731986601393, 1732021974502, 1731986629039, 1737524035012, 1732786594750, 1731986749232, 1731986529757, 1731132176797, 1731986256890, 1730724323248, 1731986503896, 1731986560742, 1734363461000, 1733145504760, 1731986464359, 1731986405248 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10229/Reviewer_gjCY" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Reviewer_pnTL" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Reviewer_pnTL" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Reviewer_ok21" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Area_Chair_GvWF" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ], [ "ICLR.cc/2025/Conference/Submission10229/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Cached Multi-LoRA (CMLoRA), a framework for training-free, multi-concept image generation that integrates multiple Low-Rank Adaptation (LoRA) modules in text-to-image diffusion models. By analyzing LoRAs in the Fourier domain, CMLoRA partitions LoRAs into high- and low-frequency sets, applying high-frequency LoRAs in early denoising stages and low-frequency ones later to reduce semantic conflicts. A novel caching mechanism selectively activates non-dominant LoRAs, enhancing computational efficiency while maintaining image quality. Evaluated against existing methods, CMLoRA shows superior performance in aesthetic and compositional quality, demonstrating its effectiveness for generating complex, coherent images from multiple concepts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a novel Fourier-based approach to address the challenge of multi-LoRA composition by partitioning LoRA modules into high- and low-frequency categories. This frequency-aware sequencing strategy is innovative, as it moves beyond the typical naive integration of LoRAs by leveraging the frequency domain to systematically order their application during inference. This approach effectively mitigates semantic conflicts and represents a creative combination of LoRA adaptation with Fourier-based analysis, contributing a unique perspective to the field of multi-concept image generation.\\n\\n2. The paper\\u2019s methodology is sound and well-supported by rigorous experimentation. 
The introduction of the Cached Multi-LoRA (CMLoRA) framework is methodically detailed, with clear mathematical formulations and a thorough explanation of the caching mechanism. The empirical evaluations are comprehensive, covering a range of established metrics like CLIPScore and MLLM-based benchmarks, which validate the claims across different aspects of multi-concept image synthesis, including element integration, spatial consistency, and aesthetic quality.\\n\\n3. The proposed CMLoRA framework addresses a significant limitation in current LoRA-based image generation methods by enabling efficient and high-quality integration of multiple LoRA modules. The training-free nature of CMLoRA increases its practical applicability, making it more accessible for scenarios where training resources are limited or infeasible.\", \"weaknesses\": \"1. What are the failure cases? A couple of visual examples of failed outputs could provide more insights into the limitations of the CMLoRA method.\\n\\n2. How were the caching hyperparameters $c_1$ and $c_2$ chosen, and how sensitive is the model\\u2019s performance to their variations? Furthermore, there is limited discussion of how the caching interval impacts the final performance in terms of both computational efficiency and image quality. Additional experiments that explore the impact of varying these parameters would make the paper\\u2019s claims around caching strategy more robust and actionable for readers interested in applying or extending CMLoRA.\\n\\n3. What is the exact impact of the frequency-based LoRA partitioning, and would alternative sequencing strategies be effective?\\n\\n4. The paper\\u2019s evaluations focus primarily on a limited set of datasets (anime and realistic styles within the ComposLoRA testbed) and may not generalize to broader multi-concept applications. 
Furthermore, CLIPScore and the other metrics used may not fully capture nuances in compositional fidelity, particularly as the number of LoRAs increases. Expanding the scope of datasets and incorporating additional image quality metrics, such as perceptual quality or domain-specific measures, would strengthen the applicability of CMLoRA across a wider range of practical scenarios.\", \"questions\": \"1. Could you provide examples or further analysis of cases where CMLoRA might struggle with semantic conflicts? For example, are there certain LoRA combinations or types of images where the method performs suboptimally?\\n\\n2. Could you clarify how you determined the values for the caching hyperparameters $c_1$ and $c_2$? Did you observe any significant performance variations with different values, and if so, could you provide insights on optimal settings?\\n\\n3. Have you tested CMLoRA on datasets beyond the anime and realistic styles in the ComposLoRA testbed? If not, could you discuss how the method might adapt to other domains?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer ok21:\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe firmly believe that our response and revisions can fully address your concerns. We are open to discussion if you have any additional questions or concerns, and if not, we kindly ask you to reevaluate your score.\\n\\nThank you again for your reviews which helped to improve our paper!\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer pnTL (1)\", \"comment\": \"Thank you for your detailed review and insightful comments. Please kindly see below for our responses to your comments:\\n\\n> ### Does a multiple LoRA mechanisms ensemble improve the behavior of the generative model in terms of concepts that are under-represented at the data level? 
\\n\\nA multiple LoRA mechanism can generally enhance the generative model's behavior, especially for concepts that are under-represented in the training data. This is because LoRA can inject prior knowledge about specific instances into the model, improving its ability to generate relevant outputs for those concepts.\\n\\nTo validate this claim, we conduct empirical experiments. However, since we do not have access to the training data of our backbone generative model, Stable Diffusion v1.5, we employ a posterior method to identify combinations of multiple concepts that may be under-represented at the data level.\\n\\nWe first define the dataset threshold as the average MiniCPM score when running the generation without LoRAs (with only text prompts; we call this the Naive model) over the whole test dataset. We then define under-represented concepts as combinations of different LoRA categories that, when fused into the generative model, result in MiniCPM evaluation scores that fall below the dataset threshold; in practice, this means their MiniCPM scores are below $6.778$.\\n\\nWe report CLIPScore and average MLLM evaluation metrics for images that only include these under-represented concepts in the two tables below. We compare images generated by the CMLoRA framework with those produced by the naive backbone model without LoRA fusion (using the same text prompt as a condition). 
These images are evaluated by MiniCPM across four criteria: Element Integration, Spatial Consistency, Semantic Accuracy, Aesthetic Quality, and their overall average score.\\n| Model | Average CLIPScore |\\n| :-------------------: | :-----------------: |\\n| Naive Model | 33.8494 | \\n| CMLoRA | 34.2665 |\\n\\n| Model | Element Integration | Spatial Consistency | Semantic Accuracy | Aesthetic Quality | Average |\\n| :-------------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: |\\n| Naive Model | 6.721 | 6.582 | 4.150 | 7.887 | 6.335 |\\n| CMLoRA | 7.826 | 7.715 | 7.742 | 8.516 | 7.950 |\\n\\nThe quantitative results imply that CMLoRA outperforms the Naive Model across all metrics. In terms of CLIPScore, which reflects concept alignment between generated images and textual prompts, CMLoRA scores higher (34.2665) than the Naive Model (33.8494). When evaluated on MLLM dimensions, CMLoRA demonstrates substantial improvements: it achieves higher ratings in Element Integration (7.826 vs. 6.721), Spatial Consistency (7.715 vs. 6.582), Semantic Accuracy (7.742 vs. 4.150), and Aesthetic Quality (8.516 vs. 7.887). These results, particularly the notable increase in the Semantic Accuracy score, suggest that CMLoRA provides better semantic coherence and visual quality for multi-concept image generation tasks, including concepts that are under-represented at the data level, when compared to the Naive Model.\\n\\nIn addition, we have added the proposed metrics (CLIPScore and MiniCPM evaluation scores) of the Naive Model for multi-concept generation within our testbed in Table 1 and Table 6. We also include visualizations in Figures 11\\u201312 and Figure 17. 
These findings demonstrate that utilizing a multiple LoRA mechanism ensemble can enhance the performance of the generative model, particularly in improving Semantic Accuracy.\\n\\n> ### Can you provide some examples in which the method shows improved semantic consistency? \\n\\nWe presented our quantitative results in Table 1 and Figures 7\\u20138. We also added some qualitative results, including real-generation examples of anime and reality multi-LoRA compositions for multi-concept image generation, in Figures 11\\u201312 and Figure 17 in Appendix C. These results demonstrate that our proposed CMLoRA effectively mitigates semantic conflicts, such as concept misalignment and concept distortion, in multi-LoRA compositions. This improvement is achieved through its frequency-domain-based LoRA scheduling mechanism, which ensures more coherent and aligned concept integration.\"}", "{\"comment\": \"Dear Reviewer gjCY:\\n\\nWe would like to express our sincere gratitude for your valuable time and effort in reviewing our work. We are writing to kindly remind you that the discussion period is drawing to a close.\\n\\nIf you have any remaining questions or concerns about our paper, we would be grateful for the opportunity to address them. We are happy to provide any clarifications you may require.\\n\\nWe fully understand your busy schedule and deeply appreciate your dedication to the review process. Thank you once again for your time.\\n\\nAuthors\"}", "{\"title\": \"Computational cost overview\", \"comment\": \"So, just to summarize, when comparing Tab. 1 with Tab.5, for N=5 the method achieves an advantage (in terms of CLIP Score), over SwitchA of 0.091 with 839 GMACs more in computation. Similarly, for N=4, the difference between CMLoRA and LoRA Hub (see the typo in the main paper!!) 
is quantified as an advantage of 0.073 in CLIPScore at a disadvantage of 434 more GMACs, or for N=3, CMLoRA exhibits a disadvantage with a deficit of 0.168 CLIPScore with also 493 more GMACs. The question is how one can prove that the additional computational cost is needed, or what represents a marginal improvement over the current SOTA, under which such an added computational cost could not be justified? (with the obvious question of how much one can trust the CLIPScore in this comparison!)\"}", "{\"title\": \"Response to Reviewer gjCY (1)\", \"comment\": \"Thank you for your thoughtful reply, and we are fully aware of your concerns. Please kindly see below for our responses to your comments:\\n\\n> ### Could you provide examples or further analysis of cases where CMLoRA might struggle with semantic conflicts? For example, are there certain LoRA combinations or types of images where the method performs suboptimally?\\n\\nOur experiments are conducted under the setting of using a single instance from each category. However, when multiple instances from the same LoRA category (*i.e.*, LoRAs with similar frequency spectra) are introduced during image generation, CMLoRA may encounter challenges with semantic conflicts. For example, as illustrated in Figure 18 of Appendix D.1, combining $F1$ (character) and $F2$ (animal) using CMLoRA may result in a failure of multi-concept composition, leading to a phenomenon known as Concept Vanish.\\n\\nThis limitation reflects a general drawback of existing training-free LoRA composition methods. Without incorporating additional prior knowledge about region or layout features, such as bounding box constraints or masked attention maps, the generative model lacks the capacity to effectively combine multiple LoRAs within similar semantic categories. 
This limitation is particularly problematic when multiple concepts within the same conceptual category need to be localized independently.\\n\\nWe added a detailed analysis of these limitations, along with visual demonstrations of failure cases, in Appendix D.\\n\\n> ### Could you clarify how you determined the values for the caching hyperparameters c1 and c2? Did you observe any significant performance variations with different values, and if so, could you provide insights on optimal settings?\\n\\nWe provide details on the selection of hyperparameters in Appendix E: Ablation Analysis. Specifically, we employ a posterior confidence interval check to determine the non-uniform caching interval used during the denoising process. In addition, we select optimal caching modulation hyperparameters $c\\\\_1$ and $c\\\\_2$ based on a grid search method.\\n\\nOur analysis reveals that when $c\\\\_1<c\\\\_2<4$, the content of the generated image exhibits minimal variation, with only slight fluctuations observed in the CLIPScore. However, when $5<c\\\\_1<c\\\\_2$, we observe a notable deterioration in performance.\"}", "{\"title\": \"Response to Reviewer pnTL\", \"comment\": \"The primary contribution of our proposed CMLoRA lies in multi-concept image generation, particularly for scenarios with $N>2$ concepts. While CLIPScore is widely reported as a traditional metric in image generation literature, it is not well-suited for evaluating images with multiple user-specific concepts. This limitation is why we emphasize Figure 8, which offers a more robust and fair comparison of CMLoRA ($\\\\text{Cache}\\\\_{D}$) against other multi-LoRA composition methods based on MLLM evaluation win rates. With the proposed caching mechanism, CMLoRA achieves average win rates of $57.7$% against LoRA Hub and $60.4$% against Switch-A across $N=2,3,4,5$, clearly demonstrating its effectiveness despite the higher computational cost. 
Additionally, Tables 6 and 7 provide detailed MLLM evaluation scores for all investigated Multi-LoRA composition methods, offering comprehensive support for this analysis.\\n\\nWe corrected the minor typo error in our main paper.\"}", "{\"title\": \"Response to Reviewer gjCY (2)\", \"comment\": \"> ### Have you tested CMLoRA on datasets beyond the anime and realistic styles in the ComposLoRA testbed? If not, could you discuss how the method might adapt to other domains?\\n\\nWe selected some well-trained LoRAs in Civitai [1], one of the largest available AIGC social platforms, to conduct experiments. We added animals and buildings as new categories to our LoRA test set. Our new testbed includes the following LoRA categories: Character, Cloth, Style, Background, Object, Animal, and Building. However, we find introducing Animal and Building concepts in the multi-LoRA composition will lead to potential semantic conflicts, such as Concept Vanish or Concept Distortion, if we choose multiple concepts in a similar semantic group to compose an image.\\n\\nWe report CLIPScore and average performance metrics for images including animal and building LoRA generated by different multi-LoRA composition methods, evaluated by MiniCPM across four criteria: Element Integration, Spatial Consistency, Semantic Accuracy, Aesthetic Quality, and their average below.\\n\\n| Model | Average CLIPScore |\\n| :-------------------: | :-----------------: |\\n| CMLoRA | 33.370 |\\n| Switch | 33.076 |\\n| Composite | 31.341 |\\n| Merge | 28.292 |\\n\\n| Model | Element Integration | Spatial Consistency | Semantic Accuracy | Aesthetic Quality | Average |\\n| :-------------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: |\\n| CMLoRA | 7.719 | 7.832 | 5.375 | 8.166 | 7.273 |\\n| Switch | 7.063 | 7.042 | 5.235 | 7.182 | 6.631 |\\n| Composite | 6.702 | 6.584 | 4.169 | 6.965 | 6.105 |\\n| Merge | 4.168 | 5.239 | 3.152 | 3.913 | 4.118 
|\\n\\nBased on our observation, we find that the performance of Semantic Accuracy decreases among all LoRA composition methods, since all methods may omit certain LoRAs when we choose multiple concepts in a similar semantic group to compose an image, as shown in Figure 18 in Appendix D.1. LoRA Composite and LoRA Merge deteriorate the most among all LoRA composition methods, since they use the information fused by all LoRAs to compose during the denoising process. We will report all proposed metrics (CLIPScore and MiniCPM evaluation scores) across the investigated multi-LoRA composition methods after running all experiments in our paper.\\n\\nAs highlighted in Appendix D Limitations, a significant issue in the field is the absence of a detailed taxonomy for multi-concept image generation classes. This gap poses challenges in systematically classifying well-defined conceptual groups, particularly due to the semantic overlaps that inherently exist among some conceptual categories. These overlaps blur the boundaries between different conceptual categories, making it difficult to establish a robust and well-defined multi-LoRA composition testbed. However, if different LoRA categories possess distinct frequency spectra characteristics, our proposed CMLoRA approach can still perform effectively. Specifically, we can use the LoRA partition method based on Fourier analysis illustrated in Section 2.2 to profile those LoRA categories and use multiple LoRAs to compose images following the generation pipeline of CMLoRA. 
\\n\\nAs mentioned in the previous paragraph, we have added our limitation discussion in Appendix D about how the lack of a detailed multi-concept image generation class taxonomy and a well-defined testbed is inherently a limitation.\\n\\n> ### What is the exact impact of the frequency-based LoRA partitioning, and would alternative sequencing strategies be effective?\\n\\nThe exact impact of the frequency-based LoRA partitioning is demonstrated through additional visualizations provided in Appendix C. These visualizations compare CMLoRA with multi-LoRA composition methods that do not utilize frequency-based LoRA partitioning. We have also explored alternative sequencing strategies, as discussed in [2], and included the corresponding experimental results in Appendix F.1 Order of LoRA Activation. These results further demonstrate the robustness of CMLoRA.\\n\\n[1] \\\"The Home of Open-Source Generative AI.\\\" Civitai, civitai.com/. Accessed 18 Nov. 2024.\\n\\n[2] Ming Zhong, Yelong Shen, Shuohang Wang, Yadong Lu, Yizhu Jiao, Siru Ouyang, Donghan Yu, Jiawei Han, and Weizhu Chen. Multi-lora composition for image generation. arXiv preprint arXiv:2402.16843, 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer gjCY,\\n\\nWe appreciate all of the valuable time and effort you have spent reviewing our paper. \\n\\nAs the discussion period concludes in five days, we gently request that you review our reply and consider updating your evaluation accordingly. We believe that we have addressed all questions and concerns raised, but please feel free to ask any clarifying questions you might have before the end of the discussion period. \\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer gjCY (3)\", \"comment\": \">### Furthermore, CLIPScore and the other metrics used may not fully capture nuances in compositional fidelity, particularly as the number of LoRAs increases. 
Expanding the scope of datasets and incorporating additional image quality metrics, such as perceptual quality or domain-specific measures, would strengthen the applicability of CMLoRA across a wider range of practical scenarios.\\n\\nTraditional image metrics, while effective for general use cases, have significant shortcomings when applied to scenarios that demand nuanced evaluations of compositional fidelity, especially in out-of-distribution (OOD) contexts. These metrics may compress evaluation ranges and fail to discern the intricate qualities of individual elements in multi-LoRA compositions [2], resulting in marginal performance gains that do not accurately reflect actual advancements.\\n\\nTo address this evaluation gap, we leverage the capabilities of multi-modal large language models (MLLMs) to evaluate composable multi-concept image generation. Using in-context few-shot learning, MLLMs are better equipped to handle challenges posed by OOD samples, offering a more nuanced and context-aware assessment of compositional and quality aspects. This enhanced framework not only addresses the evaluation gap but also ensures a fair and comprehensive validation of the improvements brought by CMLoRA.\\n\\nWe include a detailed explanation in Appendix D Limitations.\\n\\nWe sincerely appreciate it if you could kindly consider improving the scores if we have sufficiently addressed the concerns. We are very happy to answer any further questions you may have.\\n\\n[2] Ming Zhong, Yelong Shen, Shuohang Wang, Yadong Lu, Yizhu Jiao, Siru Ouyang, Donghan Yu, Jiawei Han, and Weizhu Chen. Multi-lora composition for image generation. arXiv preprint arXiv:2402.16843, 2024.\"}", "{\"title\": \"Response to Reviewer ok21 (3)\", \"comment\": \"> ### The observation is based on Lora categories from Ref1. 
How does the method perform with respect to different Lora categories?\\n\\nWe selected some well-trained LoRAs in Civitai [5], one of the largest available AIGC social platforms, to conduct experiments. We added animals and buildings as new categories to our LoRA test set. Our new testbed includes the following LoRA categories: Character, Cloth, Style, Background, Object, Animal, and Building. However, we find introducing Animal and Building concepts in the multi-LoRA composition will lead to potential semantic conflicts, such as Concept Vanish or Concept Distortion, if we choose multiple concepts in a similar semantic group to compose an image.\\n\\nWe report CLIPScore and average performance metrics for images including animal and building LoRA generated by different multi-LoRA composition methods, evaluated by MiniCPM across four criteria: Element Integration, Spatial Consistency, Semantic Accuracy, Aesthetic Quality, and their average below.\\n\\n| Model | Average CLIPScore |\\n| :-------------------: | :-----------------: |\\n| CMLoRA | 33.370 |\\n| Switch | 33.076 |\\n| Composite | 31.341 |\\n| Merge | 28.292 |\\n\\n| Model | Element Integration | Spatial Consistency | Semantic Accuracy | Aesthetic Quality | Average |\\n| :-------------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: | :-----------------: |\\n| CMLoRA | 7.719 | 7.832 | 5.375 | 8.166 | 7.273 |\\n| Switch | 7.063 | 7.042 | 5.235 | 7.182 | 6.631 |\\n| Composite | 6.702 | 6.584 | 4.169 | 6.965 | 6.105 |\\n| Merge | 4.168 | 5.239 | 3.152 | 3.913 | 4.118 |\\n\\nBased on our observation, we find that the performance of Semantic Accuracy decreases among all LoRA composition methods, since all methods may omit certain LoRA in the similar semantic group, if we choose multiple concepts in a similar semantic group to compose an image, as shown in Figure 18 in Appendix D.1. 
LoRA Composite and LoRA Merge deteriorate the most among all LoRA composition methods, since they use the information fused by all LoRAs to compose during the denoising process. We will report all proposed metrics (CLIPScore and MiniCPM evaluation scores) across the investigated multi-LoRA composition methods after running all experiments in our paper.\\n\\nAs highlighted in Appendix D Limitations, a significant issue in the field is the absence of a detailed taxonomy for multi-concept image generation classes. This gap poses challenges in systematically classifying well-defined conceptual groups, particularly due to the semantic overlaps that inherently exist among some conceptual categories. These overlaps blur the boundaries between different conceptual categories, making it difficult to establish a robust and well-defined multi-LoRA composition testbed. However, if different LoRA categories possess distinct frequency spectra characteristics, our proposed CMLoRA approach can still perform effectively. Specifically, we can use the LoRA partition method based on Fourier analysis illustrated in Section 2.2 to profile those LoRA categories and use multiple LoRAs to compose images following the generation pipeline of CMLoRA. \\n\\nAs mentioned in the previous paragraph, we have added our limitation discussion in Appendix D about how the lack of a detailed multi-concept image generation class taxonomy and a well-defined testbed is inherently a limitation.\\n\\n> ### The collective guidance in eq (5) seems related to classifier-free guidance, can you please provide further analysis?\\n\\nEach element in the collective guidance functions as classifier-free guidance corresponding to the generative model with a single conceptual LoRA. 
By applying weighted summation to these elements, CMLoRA ensures harmonized guidance throughout the image generation process, enabling the cohesive integration of all elements represented by the different LoRAs.\\n\\nIn response to the reviewer's suggestions, we have added Section A.4.1 in the Appendix to illustrate the relationship between CMLoRA and classifier-free guidance, providing further clarification of their relevance.\\n\\n[5] \\\"The Home of Open-Source Generative AI.\\\" Civitai, civitai.com/. Accessed 18 Nov. 2024.\"}", "{\"summary\": \"In this paper, the authors propose an analysis of typical LoRA algorithms when subjected to a caching mechanism. The study is further extended with the proposal of a framework integrating multiple LoRA mechanisms, aiming at reducing concept-related uncertainty, which is expected to show reduced semantic misconceptions. The proposed method is extensively evaluated in terms of CLIPScore and MiniCPM-V testing.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is generally well written and, (at least for the class of similar papers) rather easy to follow.\\nThe claims of the authors, on which the paper's discourse is based, are verified through evaluations which can become clear, if correctly exemplified.\", \"weaknesses\": \"Even if the writing is good, the quality of the visuals (e.g. Fig. 4, 6) can be improved.\\nA lack of visual comparisons is not expected, given the fact that most of the evaluations showing a certain advantage of the proposed method are either purely subjective or extremely difficult to quantify. \\nAt least in terms of quantitative evaluations (in terms of CLIPScore), the introduction of the cache mechanism does not show consistent results, but rather mixed ones. A systemic improvement/degradation of the performance is difficult to identify or explain, at least for the cache mechanism analysis. 
\\nA total lack of evaluations in terms of computational effort/efficiency.\", \"questions\": \"Does a multiple LoRA mechanisms ensemble improve the behavior of the generative model in terms of concepts that are under-represented at the data level?\\nCan you provide some examples in which the method shows improved semantic consistency? \\nWhy are the claims at the end of page 9 and the beginning of page 10 not proven through a visual comparison?\\nWhat (or how can be quantified) is the computational effort of the compared methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"It's unlikely that this work has more potential to generate harmful images than previously published work.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Revisions\", \"comment\": \"We thank all the reviewers for their valuable feedback and insightful suggestions. Based on the reviews, we have made the following revisions to our paper:\\n\\n1. We provide additional visual comparisons of generated images across varying numbers of $N$ LoRA candidates. (Appendix C) [Reviewer pnTL, ok21, gjCY]\\n\\n2. We expand our analysis to elucidate the motivation behind shifting attention from the spatial domain to the frequency domain, with a comprehensive discussion. (Section 1) [Reviewer ok21]\\n\\n3. We enhance the clarity of our method by presenting detailed explanations in a progressive way. (Section 2) [Reviewer ok21]\\n\\n4. We extend the scope of our study to consider a wider range of multi-concept applications, incorporating additional LoRA categories and expanding the meta LoRA categories within the ComposLoRA testbed. (Appendices C, D) [Reviewer ok21, gjCY]\\n\\n5. We provide a thorough explanation of the computational costs associated with the investigated multi-LoRA composition methods. (Appendix B.2) [Reviewer pnTL, ok21, gjCY]\\n\\n6. 
We introduce a new section highlighting the limitations of our proposed CMLoRA framework and discussing failure cases in multi-concept generation. (Appendix D) [Reviewer pnTL]\\n\\n7. We have improved the quality of visual illustrations (e.g., Figures 4 and 6) and addressed minor typo errors throughout our work. [Reviewer pnTL]\\n\\nWe have also addressed each reviewer\\u2019s comments with more detailed, in-depth responses. Once again we appreciate all the suggestions made by reviewers to improve our work. It is our pleasure to hear your feedback, and we look forward to answering your follow-up questions.\"}", "{\"summary\": \"This paper works on fixing issues of using Lora for multi-concept image generation. Particularly, this paper empirically finds that some LoRAs amplify high-frequency features, and others focus on low-frequency elements. Based on this observation, a frequency domain based sequencing strategy is presented to determine the optimal order in which LoRAs should be integrated during inference, and a training-free framework, namely Cached Multi-LoRA (CMLoRA), is designed to integrate multiple LoRAs while maintaining cohesive image generation. Experiments suggest that CMLoRA outperforms SOTA training-free LoRA fusion methods for multi-concept image generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Frequency domain analysis for multi-component generation is indeed an interesting idea.\\n\\n2.The proposed solution is easy and clear (although high-level insight is not very obvious.)\\n\\n3.The experiments are good in explaining the effectiveness of the solution.\", \"weaknesses\": \"1.It\\u2019s not clear why the frequency domain is needed to solve the multi-component generation task. A clear investigation and analysis on how they come up with this solution can further strengthen the contribution of the work. 
Particularly, more analysis is needed to explain why attention shifts from the spatial domain to the frequency domain.\\n\\n2.The observation that some LoRAs amplify high-frequency features, and others focus on low-frequency elements is based on a na\\u00efve experiment. More analysis or theoretical analysis is needed to better appreciate the proposed idea.\\n\\n3.The experimental results are good but not convincing in explaining the superiority of the solution.\", \"questions\": \"1.I\\u2019m not sure about Figure 1. Do you assume that meaningful amplitude difference happens only at the same time steps for the two Loras? In other words, do you assume different Lora categories are well-aligned along time steps? Furthermore, given that the observation in Figure 1 motivates the proposed method, it\\u2019s suggested to provide comprehensive analysis to explain the high/low frequency issues of different Loras.\\n\\n2.The proposed solution in Section 2.2 is presented without deep analysis. Can you please provide a high-level analysis of your solution to explain your method in a progressive way? e.g. eq (2) and eq (3) are introduced directly without explaining why.\\n\\n3.The observation is based on Lora categories from Ref1. How does the method perform with respect to different Lora categories? \\n\\n4.The collective guidance in eq (5) seems related to classifier-free guidance, can you please provide further analysis?\\n\\n5.Benchmark comparison in Table 1 seems marginal performance gain. 
Please explain further.\\n\\n6.Please also explain in detail the \\u201csemantic conflict\\u201d issue as there exist no experiments to verify the existence of this issue (or maybe I failed to find it, please show me where I can find it.)\\n\\nRef1, Multi-lora composition for image generation, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ok21 (2)\", \"comment\": \"> ### The proposed solution in Section 2.2 is presented without deep analysis. Can you please provide a high-level analysis of your solution to explain your method in a progressive way? e.g. eq (2) and eq (3) are introduced directly without explaining why.\\n\\n**Background:** Frequency analysis has proven highly effective in image analysis tasks, such as [1], [2], [3], [4], due to its key advantages: \\n1. **Efficient image feature detection:** Frequency decomposition highlights image features that are often challenging to capture in the spatial domain.\\n2. **Robustness to noise in the spatial domain:** Noise can be effectively isolated and filtered using frequency-based methods.\\n\\n**Motivation:** Building on these principles, our motivation stems from observations made in Figure 1, where we find that certain LoRAs introduce more pronounced high-frequency modifications during denoising, whereas others primarily influence low-frequency elements. Furthermore, high-frequency components are predominantly fused during the early stages of inference, as confirmed by prior work: high-frequency components vary more significantly than low-frequency ones throughout the denoising process [1]. \\n\\nWe assume that different LoRA categories exhibit distinct behaviors during the denoising process, because they fuse features with varying amplitudes across different frequency domains into the generated image. 
Consequently, improper integration of various LoRAs may result in visual artifacts or semantic inconsistencies in the generated images. Specifically, as shown in Figure 2, we find that directly applying pre-trained LoRA modules to compose the image often leads to semantic conflicts (see LoRA Merge and Switch). This failure primarily arises because independent LoRAs are integrated to contribute equally to image generation during the denoising process.\\n\\n**Analysis:** Motivated by the phenomenon observed in Figure 1, we propose a Fourier-based method to classify LoRAs with different frequency responses and group them into distinct sets, as shown in Figures 2 and 3. Through our profiling approach, we categorize LoRAs into high-frequency and low-frequency sets. During inference, high-frequency LoRAs are primarily utilized in the early stages of denoising to enhance detail and texture, while low-frequency LoRAs are predominantly applied in the later stages to refine overall structure and coherence.\\n\\n**Method:** In Equation 2, we first compute the average feature map $\\\\overline{\\\\mathbf{x}}\\\\_{t}$ along the channel dimension $C$ at denoising time $t$. We quantify the amplitude of high-frequency components in the generated image by analyzing its distribution across the frequency spectrum in Equation 3. Then we calculate the change in amplitude of high-frequency components between each time interval during the denoising process in Equation 4. \\n\\nBased on Equation 4, we can perform the profiling on the LoRA categories in the testbed: 1) establishing a prioritized LoRA order strategy $\\\\mathcal{O}$ using the ranking of variation in the intensity of high-frequency components $\\\\overline{\\\\Delta\\\\mathcal{H}\\\\_{0.2}\\\\left(\\\\overline{\\\\mathbf{x}}\\\\_{t};20\\\\right)}$ across different LoRA categories. 
2) Following the strategy $\\\\mathcal{O}$, we can categorize LoRAs into a high-frequency dominant set $H$ and a low-frequency dominant set $L$ for a multi-LoRA composition task. 3) LoRAs from the high-frequency dominant set $H$ are employed predominantly during the initial stages of denoising, where their dynamic features can effectively enhance the image\\u2019s detail and texture. In contrast, LoRAs from the low-frequency dominant set $L$ are utilized primarily in the later stages of the denoising process.\\n\\nTo ensure a seamless understanding of our approach, we have expanded the explanations in Section 2, covering the process from categorization and profiling to scheduling. \\n\\n[1] Chenyang Si, Ziqi Huang, Yuming Jiang, and Ziwei Liu. Freeu: Free lunch in diffusion u-net. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4733\\u20134743, 2024.\\n\\n[2] Frank, Joel, et al. \\\"Leveraging frequency analysis for deep fake image recognition.\\\" International conference on machine learning. PMLR, 2020.\\n\\n[3] Jiang, Liming, et al. \\\"Focal frequency loss for image reconstruction and synthesis.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\\n\\n[4] Li, Jia, et al. \\\"Finding the secret of image saliency in the frequency domain.\\\" IEEE transactions on pattern analysis and machine intelligence 37.12 (2015): 2428-2440.\"}", "{\"title\": \"Response to Reviewer ok21 (4)\", \"comment\": \"> ### Benchmark comparison in Table 1 seems marginal performance gain. Please explain further.\\n\\nThe seemingly marginal performance gain observed in Table 1 arises primarily from the limitations of the traditional image evaluation metric, CLIPScore, which we initially used to benchmark multi-LoRA composition methods. 
While CLIPScore is effective in evaluating general image-text alignment within its domains, it has significant shortcomings when applied to scenarios requiring the assessment of out-of-distribution (OOD) concepts, such as user-specific instances. Its evaluations may fall short in capturing specific compositional and quality aspects, as it lacks the capability to discern the nuanced features of individual elements [6]. This limitation inherently results in a compressed range of evaluation scores for multi-LoRA composition methods, causing improvements to appear marginal despite significant advancements in comprehensive compositional quality.\\n\\nTo address this evaluation gap, we leverage the capabilities of multi-modal large language models (MLLMs) to evaluate composable multi-concept image generation. Using in-context few-shot learning, MLLMs are better equipped to handle challenges posed by OOD samples, offering a more nuanced and context-aware assessment of compositional and quality aspects. This enhanced framework not only addresses the evaluation gap but also ensures a fair and comprehensive validation of the improvements brought by CMLoRA.\\n\\nWe include a detailed explanation in Appendix D Limitations.\\n\\n> ### Please also explain in detail the \\u201csemantic conflict\\u201d issue as there exists no experiments to verify the existence of this issue (or maybe I failed to find it, please show me where I can find it.)\\n\\nWe added Figure 2 to demonstrate three types of semantic conflicts raised in multi-LoRA composition methods. In the spatial domain, potential semantic conflict of multi-concept composition may be: 1) Concept Misalignment: some concepts are generated with false semantic information 2) Concept Vanish: some concepts may be completely ignored 3) Concept Distortion: some concepts are incorrectly combined. \\n\\nThe first case may arise from insufficient semantic information from a specific LoRA being fused into the generated image. 
The second scenario may result from a dominance issue, where one LoRA model overshadows the contributions of others, causing the generation process to lean heavily towards its specific attributes or style, thereby failing to generate a balanced representation. In the third case, features from multiple content-specific LoRAs that are intended to represent different subjects blend indistinctly, leading to a loss of integrity and recognizability for each concept.\\n\\nWe attribute the semantic conflict to the frequency discordance of multi-LoRA composition during the denoising process, since LoRAs are typically trained independently and fuse features across different frequency domains into the generated image. \\n\\nWe also include additional visual examples of semantic conflict in Appendix D. \\n\\nWe hope this time our additional results can sufficiently resolve your concerns. We sincerely appreciate it if you could kindly consider improving the score, and are very happy to answer any further questions you may have.\\n\\n[6] Ming Zhong, Yelong Shen, Shuohang Wang, Yadong Lu, Yizhu Jiao, Siru Ouyang, Donghan Yu, Jiawei Han, and Weizhu Chen. Multi-lora composition for image generation. arXiv preprint arXiv:2402.16843, 2024.\"}", "{\"metareview\": \"The paper was reviewed by three experts and, initially, they provided unanimous \\\"5: marginally below the acceptance threshold\\\".\\n\\nThe authors provided responses to reviewers' raised concerns and new results.\\n\\nOnly Reviewer pnTL participated in the discussion with the authors and concluded that \\\"The proposed method clearly presents some advantages over the current SOTA in terms of various aspects regarding image generation quality. 
However, this comes at a price, obvious in terms of computational cost.\\\"\\n\\nReviewers ok21 and gjCY did not react to the authors' responses, and did not provide final assessments.\\n\\nThe AC carefully checked the paper, the reviews and the responses, and agrees with Reviewer pnTL that the proposed method clearly has merits in comparison with the current state-of-the-art but is computationally heavy, and that most of the concerns raised by Reviewers ok21 and gjCY were addressed by the authors.\\nAll in all, it is an interesting work with good results and of interest for the image generation community.\\n\\nTherefore, the AC recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Only Reviewer pnTL participated in the discussion with the authors and concluded that \\\"The proposed method clearly presents some advantages over the current SOTA in terms of various aspects regarding image generation quality. However, this comes at a price, obvious in terms of computational cost.\\\"\\n\\nThe other two reviewers did not provide a final, post-rebuttal assessment, nor did they interact with the authors despite the fact that the authors provided early responses.\"}", "{\"comment\": \"Dear Reviewer ok21:\\n\\nWe would like to express our sincere gratitude for your valuable time and effort in reviewing our work. We are writing to kindly remind you that the discussion period is drawing to a close.\\n\\nIf you have any remaining questions or concerns about our paper, we would be grateful for the opportunity to address them. We are happy to provide any clarifications you may require.\\n\\nWe fully understand your busy schedule and deeply appreciate your dedication to the review process. Thank you once again for your time.\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ok21 (1)\", \"comment\": \"Thank you for the thoughtful questions. Please kindly see below for our responses to your comments:\\n\\n> ### I\\u2019m not sure about Figure 1. 
Do you assume that meaningful amplitude differences happen only at the same time steps for the two LoRAs? In other words, do you assume different LoRA categories are well-aligned along time steps? Furthermore, given that the observation in Figure 1 motivates the proposed method, it\\u2019s suggested to provide a comprehensive analysis to explain the high/low frequency issues of different LoRAs.\\n\\nWe assume that different LoRA categories exhibit distinct behaviors during the denoising process due to their fusion of varying semantic information into the generated image. Since LoRAs are typically trained independently and fuse features with varying amplitudes across different frequency domains into the generated image, integrating these independent LoRAs with equal contributions to image generation may introduce inherent conflicts.\\n\\nAs discussed in Section 1 (Introduction), our motivation stems from an observation in an example: the Character LoRA fuses a higher proportion of high-frequency components, resulting in greater variation in edges and textures compared to the Background LoRA, at the inference stage. This insight forms the basis of our hypothesis that frequency-domain scheduling can help multi-concept generation. Specifically, our finding suggests that certain LoRAs introduce more pronounced high-frequency modifications during denoising, whereas others primarily influence low-frequency elements. This can be explained as follows: (1) Some LoRAs enhance high-frequency components, corresponding to rapid changes like edges and textures. (2) Others target low-frequency components, representing broader structures and smooth color transitions.\\n\\nFurthermore, we observed that high-frequency components of LoRAs are predominantly fused during the early stages of inference, as shown in Figure 3. This observation aligns with prior work showing that high-frequency components vary more significantly than low-frequency ones throughout the denoising process [1]. 
Consequently, improper integration of various LoRAs may result in visual artifacts or semantic inconsistencies in the generated images. \\n\\nTo make these points clearer, we have edited the text in our introduction and methods sections. Additionally, we have added Figure 2 to the introduction to visually demonstrate how existing methods fail due to these semantic conflicts.\\n\\n[1] Chenyang Si, Ziqi Huang, Yuming Jiang, and Ziwei Liu. Freeu: Free lunch in diffusion u-net. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4733\\u20134743, 2024.\"}", "{\"title\": \"Response to Reviewer pnTL (2)\", \"comment\": \"> ### Why are the claims at the end of page 9 and the beginning of page 10 not proven through a visual comparison?\\n\\nDue to space constraints, we focused on presenting key results in the main text through a radar map (Figure 7) and win rate figures (Figure 8) to support our first claim, and Figures 9\\u201310 to substantiate our second claim.\\n\\nIn response to the reviewer\\u2019s suggestion, we have expanded our visual analysis in Appendix C to strengthen our claims further.\\n\\n**For the first claim**, we added Figures 11\\u201312 and Figure 17 in Appendix C. These figures validate that CMLoRA demonstrates superior performance compared to other multi-LoRA fusion methods in resolving semantic conflicts during multi-LoRA composition. They highlight CMLoRA\\u2019s effectiveness in addressing challenges such as misalignment and distortion, ensuring cohesive multi-concept integration.\\n\\n**For the second claim**, we introduced Figures 13\\u201316 in Appendix C. These figures provide visual comparisons of generated images across varying numbers of LoRA candidates ($S1$: Character, $S2$: Clothing, $S3$: Background, and $S4$: Object) and demonstrate how our proposed caching mechanism ($Cache_{D}$) enhances conceptual LoRA performance. 
The figures illustrate that as the number of LoRAs $N$ increases, $Cache_{D}$ becomes increasingly critical in optimizing composition outcomes and improving overall performance.\\n\\nThese additional figures provide comprehensive visual evidence that addresses the reviewer\\u2019s concern and further supports the claims made in the paper.\\n\\n> ### What is (or how can we quantify) the computational effort of the compared methods?\\n\\nWe provide a detailed analysis of the computational cost associated with the investigated multi-LoRA composition methods in Appendix B.2. The results, summarized in Table 5, present the computational cost for all methods, measured in terms of Multiply-Accumulate Operations (MACs). Additionally, we specifically visualize the computational cost of CMLoRA in Figure 11 in Appendix E.2, offering further insights into its efficiency.\\n\\n> ### At least in terms of quantitative evaluations (in terms of CLIPScore), the introduction of the cache mechanism does not show consistent results, but rather mixed ones. A systematic improvement/degradation of the performance is difficult to identify or explain, at least for the cache mechanism analysis. \\n\\nWhile traditional metrics like CLIPScore are widely used for general evaluations, they exhibit notable limitations in scenarios requiring nuanced assessments, such as compositional fidelity in out-of-distribution (OOD) contexts. These metrics may compress evaluation ranges and fail to discern the intricate qualities of individual elements in multi-LoRA compositions, resulting in marginal performance gains that do not accurately reflect actual advancements.\\n\\nTo provide further clarity regarding the cache mechanism analysis, we have expanded our discussion in Appendix B.2 Computational Cost Analysis. 
The key findings indicate that while the computational cost of the cache mechanism ($Cache_{D}$) is positioned between that of uniform caching mechanisms $Cache_{c=2}$ and $Cache_{c=3}$, multi-LoRA composition methods utilizing $Cache_{D}$ consistently outperform those employing other uniform caching strategies. This highlights the effectiveness of $Cache_{D}$ in balancing computational efficiency with improved compositional fidelity.\\n\\nWe sincerely appreciate it if you could kindly consider improving the scores if the above response can sufficiently address your concerns. We are very happy to answer any further questions you may have.\"}" ] }
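The author responses above describe a Fourier-based profiling procedure (Eqs. 2–4 of the paper under discussion): average the feature map over channels, measure the amplitude of its high-frequency components, and track how that amplitude changes across denoising steps. The idea can be sketched in a few lines of NumPy. This is an illustrative reading of the procedure, not the authors' code: the function names, the centered spectral disk, and the 0.2 radius convention are assumptions made here for concreteness.

```python
import numpy as np

def high_freq_amplitude(x, radius_frac=0.2):
    """Share of spectral amplitude outside a centered low-frequency disk.

    `x` is a (C, H, W) feature map; `radius_frac` plays the role of the
    0.2 cutoff in the H_{0.2} notation used in the responses above."""
    x_bar = x.mean(axis=0)                              # channel-averaged map (cf. Eq. 2)
    amp = np.abs(np.fft.fftshift(np.fft.fft2(x_bar)))   # centered 2D spectrum (cf. Eq. 3)
    h, w = amp.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    low = r <= radius_frac * min(h, w) / 2.0            # low-frequency disk mask
    return float(amp[~low].sum() / amp.sum())

def profile_lora(feature_maps):
    """Mean absolute change of high-frequency amplitude across steps (cf. Eq. 4)."""
    hf = [high_freq_amplitude(x) for x in feature_maps]
    return float(np.mean(np.abs(np.diff(hf))))
```

Under this reading, LoRAs scoring higher under `profile_lora` would be assigned to the high-frequency dominant set $H$ and scheduled in the early denoising stages, with the remainder forming the low-frequency set $L$ applied in the later stages.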
4hp2bVdaHU
Data-Aware Training Quality Monitoring and Certification for Reliable Deep Learning
[ "Farhang Yeganegi", "Arian Eamaz", "Mojtaba Soltanalian" ]
Deep learning models excel at capturing complex representations through sequential layers of linear and non-linear transformations, yet their inherent black-box nature and multi-modal training landscape raise critical concerns about reliability, robustness, and safety, particularly in high-stakes applications. To address these challenges, we introduce YES training bounds, a novel framework for real-time, data-aware certification and monitoring of neural network training. The YES bounds evaluate the efficiency of data utilization and optimization dynamics, providing an effective tool for assessing progress and detecting suboptimal behavior during training. Our experiments show that the YES bounds offer insights beyond conventional local optimization perspectives, such as identifying when training losses plateau in suboptimal regions. Validated on both synthetic and real data, including image denoising tasks, the bounds prove effective in certifying training quality and guiding adjustments to enhance model performance. By integrating these bounds into a color-coded cloud-based monitoring system, we offer a powerful tool for real-time evaluation, setting a new standard for training quality assurance in deep learning.
[ "Deep learning", "data-driven bounds", "training process", "training quality monitoring", "safe AI", "reliable AI training", "regulatable AI", "performance certification" ]
https://openreview.net/pdf?id=4hp2bVdaHU
https://openreview.net/forum?id=4hp2bVdaHU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w4Fk4aoM9J", "s1gQC5BPgx", "qkfmfiKAc4", "pHTvOJsGgo", "otS19oj1gu", "oKYr4fqyz1", "mpDQTnw4Ke", "lHU2jDXBys", "hHH8VQRq5O", "gAS52tudQ7", "eZN7Ned5Cc", "cniaI84WB9", "bxXqTSlffF", "bLIURZbDSa", "aNM0ztDvIX", "VzKvoiiI7W", "SEDR9h1ew2", "NpOdOI8CIp", "IpUo4hShYE", "FlYcqu3vvT", "DWgXxFDboW", "CofkooSzwc", "5zzkZKqEge", "2j24SFwMvJ" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730605632383, 1732320824245, 1730554525606, 1732352569411, 1732394036615, 1732575788077, 1732328167215, 1732574424070, 1733204239111, 1733177440899, 1732737415304, 1733175714898, 1732575001520, 1729764720008, 1738103152077, 1732546730124, 1730593096599, 1732322720412, 1732328671915, 1732737571456, 1732327836922, 1732503597601, 1732392381556, 1732324947414 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_jcCC" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_cAaf" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_SByb" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_jcCC" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_SByb" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_gBVS" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12694/Reviewer_SByb" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_cAaf" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_gBVS" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_SByb" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Reviewer_gBVS" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ], [ "ICLR.cc/2025/Conference/Submission12694/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces YES training bounds, a framework for real-time, data-aware certification and monitoring of neural network training. The framework evaluates data utilization efficiency and optimization dynamics, providing insights into training progress and detecting suboptimal behavior. The paper validates the YES bounds using synthetic and real data experiments, offering a tool for certifying training quality and guiding performance enhancements.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed system's clarity is enhanced by the color-coded cloud-based monitoring system, which makes it intuitive for practitioners to interpret training status visually.\", \"weaknesses\": \"This paper has over-claimed its applicability, especially in model robustness and safety. None of the experiments discuss model safety and robustness. 
Accepting this paper unchallenged may send the wrong signal that the proposed method for enhancing model safety or robustness has been vetted, which it has not been, due to the omission of related experiments.\", \"questions\": [\"Can you clarify how the YES training bounds directly contribute to improvements in model robustness and safety?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s insightful comments. We would like to highlight that model safety and robustness have multiple key elements, with training quality being a cornerstone that significantly influences reliability. If the training performance is poor, it is unreasonable to expect meaningful or reliable results from the model. In this paper, our primary objective is to monitor the quality of the training process. Leveraging our proposed mathematically grounded framework, we introduce the YES bounds and their associated cloud system, which serve as a sanity check for the optimizer. While we agree with the reviewer that we did not exhaustively show the impact of our bounds on test results, we have results (Fig. 4 of the supplemental material) that show the correlation between training quality and test outcomes. We will further include MNIST results for both the training and test stages in the revised manuscript. We appreciate your comments for bringing to our attention that, without clarifying how central training quality is to model reliability, the contribution may be understood as overstated. We hope that our clarifications have been helpful. Thank you!\"}", "{\"summary\": \"This paper introduces YES training bounds, a framework for real-time, data-aware certification of training, which aims at assessing the quality of model training. Specifically, these bounds evaluate the efficiency of data utilization and optimization dynamics. 
The depth and non-linear activation functions of models are taken into consideration. Experimental validation on synthetic and real data demonstrates the effectiveness of the bounds in certifying training quality.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The discussed topic is interesting, focusing on monitoring training quality and progress. With the proposed bounds, users could better control the optimization, which could benefit the community.\\n2. By considering the specific structure and properties of training data, the proposed YES bounds could provide tailored and precise evaluations of training performance.\\n3. Some experiments on the image denoising task demonstrate the effectiveness of the proposed bounds.\", \"weaknesses\": \"1. No detailed discussion of relevant works, which makes it difficult to situate this paper. Some discussions of relevant optimization works are missing, such as [a, b].\\n2. The contribution of this paper could be overclaimed. YES bounds are introduced to indicate the training quality. Although the authors claim that the proposed training bounds aim at improving the reliability, robustness, and safety of models, it is difficult to see how the YES bound can be utilized for such a purpose.\\n3. This paper discusses the scenario of non-direct paths in Section 4.2.1; however, it is difficult to see how the enhanced bounds that involve intermediate points can tackle this issue.\\n4. Another concern lies in the evaluation: it is unclear whether the proposed training bounds can be generalized to different CV and NLP applications, such as image generation via diffusion, VQA via LLaVa, etc.\\n5. It is also unclear how the proposed training bounds can motivate new research works or provide insights into this field.\\n\\n[a]. Generalization Bounds for Stochastic Gradient Descent via Localized \\u03b5-Covers. NeurIPS 2022.\\n\\n[b]. 
Closing the convergence gap of SGD without replacement. ICML 2020.\", \"questions\": \"1. Please discuss the relevant work in the field of optimization and clarify the novelty of this paper.\\n2. Please clarify the contribution and explain the application of YES bounds in the field of model robustness and reliability.\\n3. Please explain why the enhanced bounds in Section 4.2.2 can tackle the issues discussed in Section 4.2.1.\\n4. Please provide more evaluation on different CV/NLP tasks to highlight the generalization of the proposed bounds.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"I need to see the revised manuscript\", \"comment\": \"I appreciate the authors' effort in responding to my concerns. However, I cannot change my mind regarding any of my concerns until I have seen the promised new results (e.g. results for training on MNIST or a table with a performance comparison). Please make use of the functionality of this forum to edit your submission and upload the revised manuscript. I would also want (as Reviewer gBVS also mentioned) to have the appendix included in the main manuscript for easier navigation through the paper.\"}", "{\"comment\": \"**W10**: Thanks for the comments. You would likely agree that when the network is optimized, the optimal corresponding intermediate mapping points $\\\\mathbf{Y}_k$ (the outcomes of the layers) are typically not equal to $\\\\mathbf{Y}$. In fact, the role of training is to find the optimal intermediate mapping points. I think you will also agree that, if we knew where the intermediate mapping points $\\\\mathbf{Y}_k$ are located at optimal training, we could create much better bounds\\u2014intuitively, the closer the chosen $\\\\mathbf{Y}_k$ are to the optimal values, the better the bounds our YES framework can create. 
That is why we also have a mechanism to import data from the training process, enhancing the bounds as the training itself improves.\"}", "{\"comment\": \"Thank you for your continued feedback and for expressing your concerns about our paper. We appreciate the opportunity to clarify our contributions and address the points you've raised.\\n\\nWe understand that our previous reference to SB 1047 may not have effectively illustrated the relevance of our work. Rather, we aimed to highlight the broader movement towards AI safety and the importance of developing technical tools that can support future regulatory efforts.\", \"our_main_point\": \"While there may not be specific legislation regulating training quality at this moment, it is crucial for the research community to proactively develop technical solutions that make such regulation feasible in the future. Effective regulation must refer to measurable benchmarks to assess compliance and performance. Our work contributes to this foundational groundwork by providing a practical and quantifiable method for monitoring training quality.\", \"impact_of_our_work\": \"The YES Training Bounds framework offers a novel, data-aware method for real-time monitoring of neural network training quality. This benchmark can serve as a reference point for practitioners and potentially for future regulatory standards to ensure AI systems are trained properly. Our framework could assist in defining industry best practices and standards for training quality assessment. By detecting suboptimal training behaviors, such as loss plateaus in non-optimal regions, practitioners can intervene promptly to adjust training strategies. 
This reduces the risk of deploying AI systems that have been inadequately trained.\", \"addressing_your_concerns_directly\": \"While we do not anticipate legislation specifying technical details like \\\"training loss plateauing,\\\" regulators will require measurable criteria to assess AI systems' safety and effectiveness. The YES bounds provide measurable criteria that regulators can refer to, offering a way to standardize training quality assessments across different models and applications.\", \"conclusion\": \"We believe that for reasons that include our initial phrasing of the response the impact of the work was underestimated. We hope that this clarification addresses your concerns and demonstrates the value and relevance of our contributions. We are committed to advancing the field in ways that are both technically meaningful and practically impactful.\\n\\nThank you again for your thoughtful feedback.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"**W1**: Thank you for highlighting this. We will cite these works and emphasize their contributions in the introduction of our revised manuscript. However, please note that our study focuses on monitoring the training process during epochs. In Section F of the supplemental material, we provided a brief suggestion for optimization adjustments, but further investigation is beyond the scope of this paper and would be an exciting direction for future research.\\n\\n**W2**: We appreciate the reviewer\\u2019s insightful comments. We would like to highlight that model safety and robustness have multiple key elements, with training quality being a cornerstone that significantly influences reliability. If the training performance is poor, it is unreasonable to expect meaningful or reliable results from the model. In this paper, our primary objective is to monitor the quality of the training process. 
Leveraging our proposed mathematically grounded framework, we introduce the YES bounds and their associated cloud system, which serve as a sanity check for the optimizer. While we agree with the reviewer that we did not exhaustively show the impact of our bounds on test results, we have results (Fig. 4 of the supplemental material) that show the correlation between training quality and test outcomes. We will further include MNIST results for both the training and test stages in the revised manuscript. We appreciate your comments for bringing to our attention that, without clarifying how central training quality is to model reliability, the contribution may be understood as overstated. We hope that our clarifications have been helpful. Thank you!\\n\\n**W3**: By direct path, we refer to the scenario in which we progressively project the input of each layer to the output $\\\\mathbf{Y}$. In our proposed framework, YES-k, we chose a non-direct path approach which selects the intermediate mapping points $\\\\mathbf{Y}_k$ from the training process itself. Our motivation for this is that it is well known in the ML community that when training minimizes the objective function in Eq.~(1) of the main paper, we likely obtain meaningful intermediate mapping points that greatly reduce the objective. Therefore, we anticipated that if we choose these mapping points from the intermediate layers of the training itself, we may hope for a tighter bound compared to YES-0. In the numerical results, we can observe this very phenomenon, which indicates that as training improves it is able to provide enhanced intermediate mapping points for our YES training bound machinery.\\n\\n**W4**: In this work, we introduced data-driven bounds for fully connected networks, marking the first study in this area. 
Future research could extend our approach to various network architectures, such as CNNs, by applying these bounds to their fully connected layers, or by extending these bounds to architectures used in various applications, such as NLP and diffusion models, as you mentioned. Please note that we have an evaluation of our proposed bounds on the MNIST dataset, which will be added in the revised manuscript.\\n\\n**W5**: We have a research roadmap to extend our framework to CNNs, DUNs (deep unfolding networks), and PINNs.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to reassess our manuscript. We appreciate your acknowledgment of the interesting aspects of our work and understand that our previous response may not have fully addressed your concerns.\\n\\n--- Allow us to clarify: The reliability and robustness of a machine learning model are fundamentally rooted in the quality of its training process. A model trained under suboptimal conditions is more likely to perform unpredictably or fail when exposed to new data. By providing a framework to monitor and certify training quality in real time, the YES bounds help ensure that models are being trained effectively, which is a crucial step toward building reliable and robust systems.\\n\\n--- We understand your concern regarding the limited scope of our experimental evaluation and the convincing power of Figure 4 in the supplemental material. In response to your feedback, we have conducted additional experiments on the MNIST dataset (as can be seen now in the revised manuscript), a standard benchmark in machine learning. These experiments demonstrate that the YES bounds effectively monitor training quality in a classification task and correlate with both training and test performance.\\n\\n--- Our approach is grounded in solid mathematical principles from optimization theory. 
This theoretical robustness suggests that the YES bounds have general applicability across different models and training scenarios. While extensive empirical evaluations are valuable, we hope you agree that a strong theoretical foundation provides a more enduring contribution to the field. \\n\\n--- Our work serves as a pioneering step toward a new line of research focused on training quality, which has often been overshadowed by the emphasis on generalization.\\n\\n--- We believe that, partly due to the initial phrasing of our response, the impact of the work was underestimated. We hope this detailed explanation addresses your concerns and demonstrates the relevance and potential impact of our work on the reliability, robustness, and safety of machine learning models.\\n\\nThank you for your consideration, and we welcome any further questions or suggestions you may have.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"I have carefully read the authors' rebuttal as well as other reviewers' feedback. I recommend the authors conduct more studies on how prior art addresses model safety; it is not enough to simply claim that training quality significantly affects reliability and to conclude from this that the proposed method will benefit model reliability. The authors should make significant revisions to the paper to better reflect its true contribution.\"}", "{\"comment\": \"Thank you for your valuable feedback. The primary focus of our paper is to monitor the training process's performance using a provided heuristic solution rather than assessing fairness or reliability during the testing phase. We aim to offer a straightforward, real-time method for evaluating training performance across epochs.\\n\\nBased on your comment, we want to clarify that we do not analyze a pre-trained neural network for the testing stage. Instead, we continuously monitor the training process. 
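As a minimal illustration of what this continuous monitoring could look like in code (a hedged sketch under our own assumptions: `yes0_bound`, the `tol` margin, and the exact color thresholds are illustrative names and choices, not the paper's definitions):

```python
import numpy as np

def yes0_bound(X, Y):
    """Heuristic upper bound on the training MSE: fit a single
    least-squares (pseudoinverse) map from inputs X to targets Y.
    A properly trained network should do at least this well."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.mean((X @ W - Y) ** 2)

def traffic_light(train_loss, bound, tol=0.05):
    """Color-code one epoch's training loss against the bound."""
    if train_loss < bound * (1 - tol):
        return "green"   # clearly beating the heuristic baseline
    if train_loss <= bound * (1 + tol):
        return "yellow"  # roughly at the baseline
    return "red"         # worse than the heuristic: training suspect
```

In practice, the bound would be computed once from the training data, and each epoch's loss would then be compared against it.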
If the solution falls within the \\\"green region\\\" during training, we can state that it is suitable for use in the test phase.\\n\\nFrom our understanding of your concern, your focus is on the reliability and performance of the neural network during testing. Could you please clarify whether you recommend adding fairness and reliability assessments during the testing phase in addition to monitoring the training process?\"}", "{\"title\": \"Reply to Authors (1/2)\", \"comment\": \"I thank the authors for responding to my concerns and for uploading the revised version of their manuscript along with new experimental details. Below I will discuss my previous concerns and the extent to which they were addressed.\\n\\n**W1**: I appreciate the effort to run additional experiments on a more reality-grounded setting with the MNIST dataset. The details provided in the revised manuscript show that the method can be applied to a more realistic scenario. However, the new experiments bring me some more questions:\\n\\n**W1.1**: I understand that to suit the experimental setup of the YES bounds, the output of the classification problem has to be transformed into a matrix with the same size as the input, instead of consisting of a 10-dimensional vector. Why was this necessary? From my perspective there should be no problem to have the last layer as a linear projection to a lower dimensional space. If this limitation cannot be easily addressed within the YES bounds framework, it may indicate challenges in adapting the method to broader scenarios.\\n\\n**W1.2**: While the results obtained with only 5000 samples for train and test are still valuable, I don\\u2019t understand why this down-scaling is necessary. MNIST is one of the smallest image datasets, and training a network that gets over 90% test accuracy can be done in less than 1h even without GPU resources. From my perspective this points toward some scalability issues with the method. Could the authors comment on this? 
\\n\\n**W1.3**: While the YES bounds can be used to correctly identify a better learning rate, it would be helpful to clarify their distinct advantages compared to simpler heuristic approaches, such as analyzing early learning curves. From the provided plots, it can be easily inferred that 5e-4 is a better value for the learning rate even after 10-25 epochs.\\n\\n**W1.4**: The YES bounds identify a green region toward the end of training even with a poor choice of learning rate. However, the train/test performance of the network does not seem to be in a good state. One might think that when reaching the green area the training goes well. How should this be interpreted?\\n\\n**W2.1**: I am even more confused after reading the response. Naming the inputs with X and the outputs with b in one case just to revert them in the other does not seem appropriate. The revised manuscript could benefit from clearer and more consistent notation to reduce potential confusion. \\n\\n**W2.2**: The authors have also not provided an explanation of how the signal recovery is expected to work in the 1D Signal Denoising task, given the high noise level and the ambiguous (random?) nature of the signal. I don\\u2019t understand whether there exists a mathematical function that can map the noisy signal to the original signal in this artificial setup. In the context of digit classification, the main assumption is that there exists a mathematical function composed of matrix multiplications and non-linear activations which can map the image to the corresponding digit. Moreover, there is an assumption that the inputs to the function follow some non-uniform and low-entropy data distribution (because with very high probability sampling an image with random pixels does not yield any recognisable digit). In this case, the notion of denoising (e.g. 
by diffusion networks) is usually formulated as finding a sample that is close to the noisy sample, while also having high probability in the original data distribution. In the presented artificial experiments, the underlying signal distribution is just a normal distribution, which has high entropy, and therefore I don\\u2019t understand how the signal denoising can be formulated. \\n\\n**W3**: Addressed by adding the MNIST experiments.\\n\\n**W4**: The promised table is not present in the revised manuscript. I feel this is even more important when considering the **W1.2** question I raised about the MNIST experiments. I know it might be too late to upload a new revised manuscript, but you can still provide the table in a markdown format in your next response.\\n\\n**W5**: Partially addressed by the MNIST experiments. I still argue that it would be more relevant to have the practical experiments in the main paper rather than the appendix.\"}", "{\"comment\": \"I thank the authors for the further clarifications and appreciate the continued friendly and polite tone throughout the peer review process despite the extensive criticism.\\n\\nI maintain my view on the paper despite these clarifications. I suggest the authors provide a very detailed step-by-step reasoning for the YES bound.\\n\\nI want to give one example of how this might look: Take a NN used as a diagnostic tool for cancer based on images. Before deployment the model is examined in many ways: sensitivity and specificity of predictions, model performance under distribution shift, fairness criteria are studied, system-level risk for adversaries is assessed, assessment of uncertainty metrics is done to create a decision risk model (low training loss can, by the way, be associated with worse uncertainty calibration), all of these are validated in a prospective study (e.g. a randomized controlled double-blind clinical trial) establishing patient utility. 
What can the YES bound provide in terms of safety that weighs on the decision to approve the model for clinical use? This really is just one example, but the point is: what does this add that other methods cannot do? Why is the YES sub-optimality so important *beyond* other measurements?\\n\\nOnce again, I thank the authors for engaging with the reviewers.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for taking the time to reassess our manuscript. We would like to inform you that we have uploaded the revised manuscript. For the reviewers' convenience, we have included the appendix in the main paper. The MNIST classification results can be found in Section F of the appendix.\\n\\nThank you for your consideration, and we welcome any further questions or suggestions you may have.\"}", "{\"summary\": \"The paper explores the challenge of monitoring the quality of training for deep learning models. The paper proposes a method based on upper bounds for the training loss, estimated using linear projections from different layers during training. The authors show that comparing the training loss with the estimated bounds can provide real-time insights into the quality of the training procedure, facilitating intervention in case of ineffective or sub-optimal training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of using linear projections as a weak model to compute upper bounds on the training loss is interesting and original.\\n\\nThe method is well-described and easy to understand. Mathematical proofs are easy to follow.\\n\\nGetting insights on the training process beyond the local optimization perspective might be a promising and relevant direction for the future of ML optimization.\", \"weaknesses\": [\"The paper only analyzes small networks with a feedforward architecture and only linear layers. 
It is not clear how the procedure could be applied to practical architectures such as ResNet or Transformers. Additionally, there are no concrete details about the architectures used. I assume all hidden layers have the same dimensionality as the input and output, but this should be clearly stated. I suggest the authors extend their experiments to more complex architectures and include a detailed description of the architectures used (e.g. a table with the number of layers, number of hidden neurons per layer, activation functions).\", \"The tasks used for training the networks are not clearly explained. It is not clear what the inputs and outputs of the models are, what the loss function is, and whether the trained models can achieve any significant generalization power. In particular, for the 1D Signal Denoising task, it is unclear how the random signal drawn from $N(0,1)$ could be recovered after applying noise drawn from $N(0,0.2)$. I would like to see more detailed explanations regarding the tasks and the training setup.\", \"The practical image recovery experiment in the Appendix does not seem to have real practical applications. My understanding is that the model is specifically trained to recover one image after it has been corrupted, which would require full access to the uncorrupted image during training. If this is correct, then this example has no more practical applicability than the synthetic data tasks. It is also unclear how the network is constructed and what its inputs and outputs are. I suggest adding some more detailed explanations about the training and the model architecture.\", \"The paper presents no analysis of the running time of the proposed method. It is unclear how the method will impact the training time of the model under real-time conditions. 
It would be interesting to see a comparison of training times with and without the YES bounds computation and an analysis of how the computational overhead scales with model size (I would expect the costs of estimating higher order YES bounds to significantly increase for very deep models).\", \"The utility of the proposed method concerning train-test generalization is only presented in the Appendix. I consider that this experiment should be presented in the main body of the paper, as ultimately the purpose of training models is generalization to unseen test data. However, it is unclear how well the model is able to generalize in this case: Figure 4 clearly shows a correlation between the evolution of the training and testing losses, but the minimum value of the test loss is similar to the starting value of the training loss, hinting towards very poor generalization capabilities. This is very likely caused by the low correlation between train and test data, so repeating the experiment on real-world data might show more favorable results. Additionally, this experiment does not present any training details (learning rate, batch size, train/test split ratio etc.).\", \"**Minor points**\", \"Algorithm 1 should have the model as input along with the training data.\", \"**Formatting**\", \"The Appendix should be part of the main pdf, not as a separate document\", \"References to Sections, Figures, Tables, Equations and Citations should have hyper-ref links for better navigability.\", \"Multiple citations are not well integrated into the text, for example: Line 031 should read \\u201c... transformations (LeCun et al.2015, Goodfellow et al. 2016).\\u201d which can be achieved by using the \\\\citep{} command. 
Line 041 should read \\u201cOymak & Soltanolkotabi (2019) theoretically demonstrate \\u2026\\u201d which can be achieved by using the \\\\citet{} command.\", \"Line 050 has an unmatched parenthesis after YES.\", \"Figures 3,4 should have a clear ordering for the learning rates (instead of 1e-3, 1e-2, 1e-4). Subfigure labels for Figures 3,4,5 would also facilitate understanding.\"], \"questions\": \"1. Please address my comments in the Weaknesses section.\\n2. The experiments show that sometimes the training gets stuck in loss plateaus while in the yellow region. Is there any action that one might take to avoid or overcome this obstacle, other than waiting for the loss to drop? \\n3. It is not clear how Deep Unfolding Networks are relevant in the context of this paper, as they are only briefly mentioned in section 4.2.1, without sufficient explanation, and then never mentioned again. Can you give a more detailed explanation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"I thank the authors for their response. However, most of my concerns have not been well addressed. For example, it is still difficult to see clear evidence that the YES bound can contribute to the reliability, robustness, and safety of models. The results (Fig. 4 of the supplemental material) are not convincing enough. The provided evaluation is rather limited, which is inconsistent with the claims made by the authors. Thus, I maintain my score.\"}", "{\"summary\": \"The paper presents a heuristic for convergence diagnostics in multi-layer perceptrons that is data-aware. The authors propose solving an OLS-like linear problem for each layer to determine a reasonable but suboptimal weight matrix for that layer. 
They then propose a traffic light system that compares the training loss to this heuristic.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"S1: Convergence diagnostics is a worthwhile problem to study. The criticism of local, curvature-based convergence diagnostics is justified and hence, providing an empirical, yet strategic \\\"benchmark\\\" framework is an interesting approach.\", \"s2\": \"The proposed method is simple and readily accessible, even to a non-specialist audience that is likely to benefit most from \\\"training support\\\".\", \"weaknesses\": \"**Motivation**\", \"w1\": \"I am sceptical of the utility of a \\\"standardization of training practices in deep learning\\\" (L078). The authors' more detailed account that \\\"The proposed YES [is] a promising pathway toward establishing a benchmark for the AI industry, regulators, and users alike\\\" (L070) is vague and does not provide concrete details on _how_ this is valuable. For instance, the authors could provide examples or consult recent legislation on this issue.\", \"w2\": \"Prior work is not properly cited. In fact, only a single paper (Oymak et al.) on neural network optimisation / convergence diagnostics is cited.\\n\\n**Presentation**\", \"w3\": \"The Appendix is missing.\", \"w4\": [\"The language and notation for the main results is not always clear. Examples:\", \"Eq 3 uses $\\\\mathbf{Y}_k$ without ever defining it before. I suggest the authors provide a clear definition first.\", \"First paragraph in Section 4.1 is not clear (L183 - 189). Same for L245-L253.\", \"The paper describes a theoretical optimal model, the actual model, and the \\\"bounding\\\" model obtained by setting weights through the pseudoinverse. 
It is not always clear which of these models the weight matrices or activations belong to.\", \"It is not clear what \\\"intermediate states\\\", \\\"intermediate mappings\\\", \\\"intermediate points\\\" are.\"], \"w5\": [\"The authors are inconsistent in their claims and tend to overstate their contributions. While the authors state at times that their method is a \\\"sanity check\\\", they later state:\", \"\\\"[...] can attest that the training is not proper.\\\" (L205)\", \"\\\"The answer (YES or NO) will provide immediate relief as to whether training has been meaningful at all\\\" (L211)\", \"\\\"The reason is simple: heuristics outperform random, and optimal beats heuristic\\\" (L521) or L534 to L536.\", \"\\\"These bounds aim to provide a qualified answer to the question as to whether a neural network is being properly trained by the data: YES or NO?\\\" (L179).\", \"\\\"cloud unequivocally indicate suboptimality\\\" (L085). This is a very strong statement, for which there is no supporting evidence.\", \"A more precise account of the contributions and limitations would be appropriate, as well as fewer absolute statements without justification.\"], \"w6\": \"The proposed method is a _heuristic_, not a certificate, which typically describes provable statements that can be asserted about a model.\", \"w7\": \"The authors are overstating the impact on the safety of models. This work predominantly cites literature on ML safety, but does not set out a clear path for how their work impacts safety, how it can establish trust, and how it will help regulators. The statement \\\"This standardization could play a crucial role in fostering trust and accountability within the AI ecosystem.\\\" lacks proper justification.\\n\\n**Method**\", \"w8\": \"Using a linear model as a benchmark baseline during training is common practice (when appropriate). The main insight seems to be the prediction of the target from each intermediate layer. 
I disagree with the authors' statement: \\\"A sensible but sub-optimal approach\\\" (L185) and do not see sufficient justification for this statement. The authors state: \\\"it has also been observed in various machine learning problems that after extensive training (resembling what we can describe as optimal training), the output of some inner layers become something meaningful to domain experts\\\" (L255-258), for which they do not provide sources.\", \"w9\": \"Layerwise OLS solutions are a very basic heuristic that do not mark significant contributions to the ML community. One can trivially see that this bound becomes vacuous, even for single-layer models, when replacing ReLU with sigmoid (i.e. regard the data-generating model $\\\\sigma(AX+e)$ where $A$ has large values).\", \"w10\": [\"At times, the authors do not provide sufficient proof or citation when making non-trivial statements.\", \"\\\"Given a judicious selection of ... the latter should provide a tighter error bound compared to the YES-0 bounding approach\\\" (L252). What is a judicious selection? Such a claim should be supported by a theorem or a more detailed analysis.\", \"L255-258 as cited above.\"], \"w11\": \"The authors solely focus on the train loss, when in practice the test loss is most relevant in learning problems.\\n\\n**Experiments**\", \"w12\": \"The paper describes experiments on two small toy datasets. This is not enough to extrapolate to real impact. There are various regression and classification datasets that seem like suitable tests for this method. In particular, Boston / California Housing Prices, SVHN, CIFAR-10(0) or TinyImageNet. Convergence diagnostics will become more relevant the more complicated the loss landscape and the more non-linear the problem gets. At the same time the proposed bound will become more vacuous in these settings. A detailed discussion of this would be of interest. 
Larger, more complex, and high-dimensional datasets are important to judge the potential impact of this method.\", \"w13\": \"The authors do not compare their method against diagnostics baselines. For instance, fitting a simple one-layer linear model, or discussing other strategies for convergence diagnostics.\", \"w14\": \"Figure 3 stops before the models are converged.\\n\\n**Minor**\", \"mw1\": \"It is community standard to have pdf links between in-text citations and the bibliography. This would be appreciated. Citations should be in round parentheses when not part of the sentence's grammatical structure, e.g. (Goodfellow et al., 2016).\", \"mw2\": \"Figure 3: should share X and Y axes.\", \"mw3\": \"The lack of algorithm boxes makes it difficult to follow the exact procedures described.\", \"questions\": \"Q1: Could the authors please clarify L090-L092: \\\"They do not produce varying certification results across different training realizations, even when initialized identically or following similar optimization paths.\\\"\", \"q2\": \"Why would randomness (L088-L095) be such an issue?\", \"q3\": \"$\\\\mathbf{Y}_{k}$ is never defined. What does it denote?\", \"q4\": \"Could the authors clarify L245-L253? How does one obtain the YES-SIGMA bound precisely?\", \"q6\": \"How will the YES cloud help regulators?\", \"q7\": \"Is there a formalism to justify why projecting from each layer to $Y$ is reasonable?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**W1**: We appreciate the reviewer's comment. The focus of this work is on reliable training, which serves as a necessary condition for reliable test performance. 
To address the reviewer's concern, we will provide the example of California legislation on the importance of safe AI:\\nCalifornia is considering several legislative measures to regulate artificial intelligence, focusing on advanced AI systems' safety and accountability. One of the most significant is Senate Bill 1047 (SB 1047), which aims to establish safety and compliance standards for developers of powerful AI models, particularly those with the potential to cause significant harm.\\n\\nKey provisions of SB 1047:\\n1. Independent audits: AI developers must annually engage third-party auditors to evaluate their compliance with safety protocols, including assessing potential risks and reporting any non-compliance to the California Attorney General.\\n2. Risk assessments: Companies are required to perform pre-deployment risk evaluations and submit detailed compliance reports annually, outlining any critical harm their models might cause and the safeguards in place.\\n3. Incident reporting: Any significant safety incidents must be reported within 72 hours.\\n4. Computing cluster regulations: Operators of large-scale computing systems must monitor AI training activities, verify customer identities, and retain data to ensure compliance. They are also required to shut down non-compliant AI training activities if necessary.\\n\\nLegislation like this highlights a growing emphasis on regulatory frameworks for AI safety. Tools such as the YES training cloud could play a pivotal role in such contexts. For instance, regulators might introduce a training quality compliance clause with requirements such as: \\u201cAI developers must ensure training quality by demonstrating that the model\\u2019s training loss has plateaued within the green region of the YES training cloud. 
This evidence must be documented and retained for inspection, particularly in applications involving public utility.\\u201d This demonstrates the relevance and potential impact of reliable training in meeting emerging regulatory demands and underscores the broader significance of our work.\\n\\n**W2**: Thank you for this comment. We will add as many relevant references as possible in the revised manuscript. \\n\\n**W3**: We submitted the Appendix as a separate file, and we apologize for any confusion this may have caused. We will integrate it into the main paper.\\n\\n**W4**:\\n- Thank you for bringing this to our attention. We apologize for overlooking the inclusion of the notation's definition prior to Equation~3. $\\\\mathbf{Y}_k$ denotes the output of the $(k-1)$-th layer.\\n\\n- We appreciate the reviewer\\u2019s comment. In this context, by \\\"A sensible but sub-optimal approach,\\\" we refer to a heuristic method that one can utilize to optimize the weight matrices. We say it is sensible since the main goal of the network is to project the input to $\\\\mathbf{Y}$ when no side information about the intermediate mapping points is available. The reasons for suboptimality are that (i) we progressively project the input data $\\\\mathbf{X}$ into the output data $\\\\mathbf{Y}$ in each layer, and (ii) we (almost) overlook the non-linearity $\\\\Omega$ and obtain the weight matrices following Eq.~(10). Please refer to our response to comment W10, where we provide clarification on the reasoning presented in L245-L253.\\n\\n- In our paper, the bounding model is used to evaluate the YES bounds. To clarify this point, we will add a further explanation in the revised manuscript.\\n\\n- To address the concern raised by these words, please note that all of them refer to $\\\\mathbf{Y}_{k}$, the output of the $(k-1)$-th layer.\\n\\n**W5**: The purpose of a sanity check is to assess whether the approach\\u2014in this case, the training process\\u2014is functioning as expected. 
We believe all these statements refer to the sanity check using the YES bounds, which are used to monitor the training process's performance and verify its effectiveness. When we refer to the sub-optimality indicated by our color-coded cloud, we mean that if a solution falls within the red or yellow regions, it signifies the existence of better solutions within the optimization landscape, for instance, the solutions provided by YES-0 and YES-$k$. This implies that such solutions are not optimal.\"}", "{\"comment\": \"**W1**: In this work, we introduced data-driven bounds for fully connected networks, representing the first study in this area. Future research could extend our approach to various network architectures, such as CNNs, by applying these bounds to their fully connected layers or adapting them to architectures used in diverse applications, including transformers, as you suggested. Additionally, we tested our proposed bounds on the MNIST dataset and will include these results in the revised manuscript. Based on your feedback, we will also add more comparisons involving different numbers of layers. As you correctly pointed out, this paper assumes that the input and output dimensions are the same. We will emphasize this more clearly in the revised manuscript.\\n\\n**W2**: To address the reviewer's concern, in the phase retrieval model, $\\\\mathbf{X}_i$ and $\\\\mathbf{b}_i$ denote the input and output data, respectively. In the signal denoising model, the $\\\\mathbf{b}_i$'s denote the input and the $\\\\mathbf{X}_i$'s are the output. The loss function that we considered in all examples was MSE. These examples are synthetic, and we presented them to clearly explain our method and the applicability of the associated YES clouds. 
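For concreteness, the 1D denoising setup can be generated as below; this is only a sketch, where the sample count, signal length, and the reading of $N(0,0.2)$ as a noise standard deviation of 0.2 are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 64                              # assumed sample count and signal length
X = rng.standard_normal((n, d))              # clean target signals X_i, entries ~ N(0, 1)
b = X + 0.2 * rng.standard_normal((n, d))    # noisy inputs b_i, treating 0.2 as the noise std

# MSE of the trivial "do nothing" denoiser, i.e. the identity map.
# Any learned denoiser's training loss should at least beat this value.
mse_identity = np.mean((b - X) ** 2)
```

The identity map's MSE (about 0.04 under these assumptions) gives a trivial reference point against which a learned denoiser's training loss can be judged.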
However, regarding generalization power, we applied our bounds to the MNIST dataset, which we will add in the revised manuscript, and as can be seen, our bound can successfully monitor the performance of digit identification.\\n\\n**W3**: We totally agree with the reviewer on this, since we only considered one image in the training process to show the monitoring power of our bounds in the training regime. However, as already discussed, we refer the reviewer to our results on MNIST classification, which is a very common dataset in the ML literature.\\n\\n**W4**: Thank you for your valuable comment. We will include a table comparing the CPU performance for the models we examined in the paper, both with and without the YES bounds.\\n\\n**Q2**: Thank you for highlighting this. Please note that our study focuses on monitoring the training process during epochs. While optimization adjustments are beyond the scope of this paper, exploring how YES bounds could be used to modify the optimization process would be an exciting direction for future research. However, in Section~F of the supplemental material, we provided a brief suggestion for this purpose.\\n\\n**Q3**: We use the Deep Unfolded Network (DUN) as an example to illustrate our point. In a DUN, each iteration of a classical algorithm is unrolled into a network layer, with each layer corresponding to one iteration of the iterative solver. In this context, we expect that increasing the number of iterations will lead to better convergence, meaning that the loss at each layer's output should decrease monotonically, similar to how error decreases with more iterations in traditional algorithms. 
However, this is not the case with DUNs; they do not exhibit strictly monotonic behavior.\\n\\nThis suggests that the training process in DUNs might take an indirect path through the optimization landscape to find better solutions, which is similar to the approach we adopted. Additionally, like our model, DUNs assume the input and output dimensions are the same.\"}", "{\"title\": \"Reply to Authors (2/2)\", \"comment\": \"**Q2**: I still consider that more thorough insights and experiments would be needed to be able to affirm that the YES bounds can actually be useful in adjusting the training process. The discussion in Appendix F (now Appendix G in the revised manuscript) is rather vague and could be strengthened by providing concrete recommendations or examples of how the proposed distance indicator could be effectively used to select hyperparameters in practice. To be convinced of the utility of the YES bounds, I would like to see at least one proposal for a method to choose hyperparameters that works when the trivial methods (such as simply monitoring the performance in the first 10 epochs for different hyperparameter values and greedily choosing the best one) fail to provide the choice that leads to the best final performance.\\n\\n**Q3**: I feel the analogy with the DUN networks is not very clear. My basic understanding (which might be far from complete) from the short description in the paper is that a DUN is conceptually similar to a transformer. They both have a list of blocks which process the information while not changing its dimensionality. In the case of a transformer, the most relevant information can usually be collected either at the first layer (word embeddings) or at the last layer (where full contextual embeddings should have formed). Any layer in between may or may not contain easily processable and therefore useful information. 
This would be in contrast to the analogy with the training process: during training by gradient descent, the loss is on average decreasing with each iteration, and in theory the network obtained should be better at approximating the function that maps inputs to outputs. If we want a non-monotonic training process we need to consider alternatives to gradient descent, such as simulated annealing.\\n\\n**Comment**\\n\\nWhile the proposed idea is interesting and the additions made in the revision are valuable steps forward, further work is necessary to fully establish the utility and generalizability of the method. Addressing the concerns raised here \\u2013 such as scalability, clarity in notation, and providing actionable insights for practitioners \\u2013 would make the paper much stronger and more impactful for the ML community. I encourage the authors to continue refining their approach, as it has the potential to make a meaningful contribution to the field. Considering all of the above, my revised score would be closer to 3.5-4, but the platform restricts me to choose 3.\"}", "{\"comment\": \"**W11**: We thank the reviewer for this insightful comment. We completely agree with you on the importance of investigating the performance during the test stage. We would like to highlight that model safety and robustness have multiple key elements, with training quality being a cornerstone that significantly influences reliability. In the race to make AI systems more reliable, generalization has taken center stage. But what about ensuring the reliability of the training process itself? To address this very question, this paper focuses on performance during the training stage. Clearly, good training performance is a necessary condition for achieving good test results. That said, we do have results (Fig 4 of Supplemental material) that show the correlation between training quality and test outcomes. 
We will further include MNIST test results for both training and test stages in the revised manuscript.\\nIt would be highly interesting for future work to further link our training monitoring approach with test-stage monitoring schemes. We have currently included test results alongside the training process to illustrate how test performance varies across different cloud regions.\\n\\n**W12**: Thank you for this insightful comment. We completely agree with you on this point. We applied our YES bounds to the MNIST dataset for a classification task to further evaluate our proposed scheme on real-world and larger datasets. The results are included in the revised manuscript to better demonstrate the effectiveness of our method. In the tasks examined in this paper\\u2014MNIST, image denoising, and the provided synthetic data\\u2014we observed that our bounds remain relevant and effective for assessing the performance of the optimizer. For tasks requiring a sigmoid activation function, we kindly refer you to our response to W9, where we address your concerns about the validity of our bounds in such scenarios.\\n\\n**W13**: Please note that convergence diagnostics consist of two main steps: monitoring the training process and modifying the optimization process if it fails to converge or moves toward suboptimal solutions. In this paper, we focus on monitoring the training process using a color-coded cloud based on the YES bounds. The modification of the training process falls outside the scope of this work at this stage and can be explored in future research.\\n\\n**Q1,2**: Since we are proposing a bound for the training process, the intrinsic definition of a bound implies that it should not vary if the initialization and the optimization path are the same (fixed).\\n\\n**Q3**: We refer the reviewer to our response in W4.\\n\\n**Q4**: In these lines, we suggest that there are more effective intermediate sequences than simply projecting each layer directly onto the final output. 
One possible approach is to leverage training results, as we have done in this paper. Alternatively, using prior knowledge about the network structure or specific application could help establish more refined bounds. Once this information about the structure of the intermediate layers is provided, one can utilize the same scheme as YES-k to obtain YES-SIGMA bounds.\\n\\n**Q6**: We kindly refer the reviewer to our response in W7.\\n\\n**Q7**: The primary objective of an NN is to map inputs to $\\\\mathbf{Y}$. A proven method to enhance this process is by deepening the network and introducing nonlinearity, which enables the model to capture complex features more efficiently. Alternatively, a straightforward but less optimal approach is to project each layer directly onto the output, disregarding the benefits of deeper network structures. Interestingly, our findings indicate that the YES-0 bound decreases as the network depth increases, even when each layer is independently projected onto the final output.\"}", "{\"comment\": \"I thank the authors for their detailed clarifications.\\n\\nI continue to be skeptical of the paper's contributions. Many of the clarifications are not able to address my concerns sufficiently. For instance, the cited passage from SB 1047 is irrelevant to model training diagnostics and I do not expect legislation on the training loss plateauing in a certain region.\\n\\nI would like to remind the authors that the appendix is still missing.\\n\\nMy main criticisms vis-a-vis the contributions, soundness and strong claims made by the paper remain unaddressed. I will maintain my critical evaluation.\"}", "{\"comment\": \"We sincerely appreciate the time and effort the review panel has dedicated to evaluating our submission, as well as the valuable feedback provided. Your insights have been instrumental in refining our work.\\n \\nIn the race to make AI systems more reliable, the focus has predominantly been on generalization. 
However, we pose an essential question: how can we ensure reliable AI when the training process itself lacks rigorous evaluation and guarantees? This gap is significant and demands a solution. Our work directly addresses this challenge by introducing a novel, practical framework that enhances the reliability of the training process\\u2014an often-overlooked but critical aspect of building safe and robust AI systems.\\n \\nSpecifically, our contributions include:\\n1. YES training bounds \\u2013 A real-time, data-aware framework to evaluate how well training is progressing.\\n2. A color-coded cloud system \\u2013 Visualize in real-time whether your training is effective (green), needs caution (sub-optimal, yellow), or is outright poor (red).\\n \\nThese contributions go beyond academic curiosity; they represent a practical and impactful solution to a crucial problem. By leveraging our framework, practitioners can:\\n- Monitor training issues as they happen.\\n- Gain actionable insights into training quality.\\n- Defend the training process against regulatory or industry benchmarks for safety and performance.\\n \\nWe have worked to address the reviewer comments and suggestions below and hope that our responses are satisfactory. We are confident that our framework offers an indispensable addition to the AI community, one that meets both the technical rigor and practical utility expected of ICLR contributions.\"}", "{\"comment\": \"**W6**: Heuristic methods in optimization are sensible but sub-optimal approaches, especially when exact analytical or approximate solutions are unavailable or when a baseline comparison is needed. To evaluate whether an iterative algorithm or non-convex optimizer can outperform these heuristics, we compare their performance against what we call \\\"YES bounds\\\"\\u2014reference solutions derived from heuristics. 
If the training process fails to surpass the heuristic benchmark, it indicates that the optimization approach may be insufficient or sub-optimal. Therefore, this comparison helps certify whether the training outcome is effectively optimized or remains suboptimal relative to the provided heuristic solution.\\n\\n**W7**: We appreciate the reviewer\\u2019s insightful comments. We would like to highlight that model safety and robustness have multiple key elements, with training quality being a cornerstone that significantly influences reliability. If the training performance is poor, it is unreasonable to expect meaningful or reliable results from the model. In this paper, our primary objective is to monitor the quality of the training process. Leveraging our proposed mathematically grounded framework, we introduce the YES bounds and their associated cloud system, which serve as a sanity check for the optimizer. While we agree with the reviewer that we did not exhaustively show the impact of our bounds on test results, we have results (Fig 4 of Supplemental material) that show the correlation between training quality and test outcomes. We will further include MNIST test results for both training and test stages in the revised manuscript. We appreciate your comments for bringing to our attention that, without clarifying how central training quality is to model reliability, the contribution may be seen as over-stated. We hope that our clarifications have been helpful. Thank you!\\n\\n**W8**: We appreciate the reviewer's insightful comment. First, we want to point out that our baseline model is not linear. It is true that in order to obtain the weight matrices layer-wise, we overlooked the effect of the activation function $\\\\Omega$. However, the effect of $\\\\Omega$ was later considered in obtaining the intermediate mapping points to respect the architecture of the network, as the reviewer can see in Eqs.~(12) and (14) in the main paper. 
Regarding the reviewer's comment \\\"A sensible but sub-optimal approach\\\", we would kindly refer the reviewer to our response to the second part of W4. \\n\\n**W9**: We appreciate the reviewer's comment. We should highlight that layerwise OLS solutions are only one component of our framework. The overall machinery is shown to work in the provided examples. Additionally, we note that this is a pioneering work in the arena of non-randomized training quality certification from an optimization perspective, and in fact, we are currently working on more sophisticated projections to enhance the bounds (balancing certification quality and computational cost of certification).\\nIn the revised manuscript, we are including our evaluation of the YES training bound framework on the MNIST dataset, which has produced very interesting results. Regarding your comment \\\"one can trivially, see that this bound becomes vacuous, even for single layer models when replacing ReLU with sigmoid (i.e. regard data-generating model $\\\\sigma(\\\\mathbf{A}\\\\mathbf{X}+\\\\mathbf{e})$ where $\\\\mathbf{A}$ has large values.\\\", we should note that for the Sigmoid activation function, our YES bounds framework will slightly change. Note that in obtaining our bounds, we utilized the fixed-point property of $\\\\Omega$, \\n\\n$\\\\|\\\\mathbf{Y}-\\\\Omega(\\\\mathbf{A}\\\\mathbf{X})\\\\|_{\\\\mathrm{F}}^2$\\n\\n$=\\\\|\\\\Omega(\\\\mathbf{Y})-\\\\Omega(\\\\mathbf{A}\\\\mathbf{X})\\\\|_{\\\\mathrm{F}}^2$\\n\\n$\\\\leq\\\\|\\\\mathbf{Y}-\\\\mathbf{A}\\\\mathbf{X}\\\\|_{\\\\mathrm{F}}^2$,\\n\\nwhere in the last step we have utilized the 1-Lipschitz property of the ReLU. Therefore, by minimizing the upper bound with $\\\\mathbf{A}=\\\\mathbf{Y}\\\\mathbf{X}^{\\\\dagger}$, we can minimize the left-hand side of the equation. To extend this result to the Sigmoid activation function $\\\\sigma$, we know that $\\\\sigma$ is a one-to-one map. 
Therefore, there exists $\\\\mathbf{Y}^{\\\\prime}$ such that $\\\\mathbf{Y}=\\\\sigma(\\\\mathbf{Y}^{\\\\prime})$. As a result, we can write\\n\\n $ \\\\|\\\\mathbf{Y}-\\\\sigma(\\\\mathbf{A}\\\\mathbf{X})\\\\|_{\\\\mathrm{F}}^2$\\n\\n$=\\\\|\\\\sigma(\\\\mathbf{Y}^{\\\\prime})-\\\\sigma(\\\\mathbf{A}\\\\mathbf{X})\\\\|_{\\\\mathrm{F}}^2$\\n\\n$\\\\leq\\\\|\\\\mathbf{Y}^{\\\\prime}-\\\\mathbf{A}\\\\mathbf{X}\\\\|_{\\\\mathrm{F}}^2,$\\n\\n where in the last step we have utilized the 1-Lipschitz property of the Sigmoid. Therefore, for the Sigmoid activation function, by minimizing the upper bound with $\\\\mathbf{A}=\\\\mathbf{Y}^{\\\\prime}\\\\mathbf{X}^{\\\\dagger}$, we can minimize the left-hand side of the equation. Clearly, this modified bounding approach for Sigmoid does not become vacuous for large values of $\\\\mathbf{A}$.\"}" ] }
4hdDPa9bpI
Graph Fourier Neural Kernels (G-FuNK): Learning Solutions of Nonlinear Diffusive Parametric PDEs on Multiple Domains
[ "Shane Loeffler", "Zan Ahmad", "Syed Yusuf Ali", "Carolyna Yamamoto", "Dan M. Popescu", "Alana Yee", "Yash Lal", "Natalia Trayanova", "Mauro Maggioni" ]
Understanding and predicting the time-dependent dynamics of complex systems governed by non-linear partial differential equations (PDEs), with varying parameters and domains, is a difficult problem that is motivated by applications in many fields. We introduce a novel family of neural operators based on a Graph Fourier Neural Kernel (G-FuNK), for learning solution generators of nonlinear PDEs with varying coefficients, across multiple domains, for which the highest-order term in the PDE is diffusive. G-FuNKs are constructed by combining components that are parameter- and domain-adapted, with others that are not. The latter components are learned from training data, using a variation of Fourier Neural Operators, and are transferred directly across parameters and domains. The former, parameter- and domain-adapted components are constructed as soon as a parameter and a domain on which the PDE needs to be solved are given. They are obtained by constructing a weighted graph on the (discretized) domain, with weights chosen so that the Laplacian on that weighted graph approximates the highest order, diffusive term in the generator of the PDE, which is parameter- and domain-specific, and satisfies the boundary conditions. This approach proves to be a natural way to embed geometric and directionally-dependent information about the domains, allowing for improved generalization to new test domains without need for retraining. Finally, we equip G-FuNK with an integrated ordinary differential equation (ODE) solver to enable the temporal evolution of the system's state. Our experiments demonstrate G-FuNK's ability to accurately approximate heat, reaction diffusion, and cardiac electrophysiology equations on multiple geometries and varying anisotropic diffusivity fields. We achieve low relative errors on unseen domains and fiber fields, significantly speeding up prediction capabilities compared to traditional finite-element solvers.
[ "Neural Operator", "Graph Neural Networks", "Graph Fourier Transform", "Partial Differential Equations", "Operator Learning", "Cardiac Electrophysiology" ]
Reject
https://openreview.net/pdf?id=4hdDPa9bpI
https://openreview.net/forum?id=4hdDPa9bpI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xMBsVxlYG0", "r9BuDkNQAW", "ppu84WH4fL", "mjduTCwLX4", "mZ7wxUhnCs", "m8zubSEnNe", "lERqXGtaP4", "jGcLMUQr2Y", "hpuljPZhJ8", "fqEBagVuJ0", "cWwLmLzmNp", "adHBGxfuQK", "VEX31vAIHw", "U6YWcgF7t5", "QVrxbK1BXm", "O7qkGEQGZX", "JuM94f80Fu", "DhWa1D6Wzt", "CVZuliK2Dl", "C33WXbcVOc", "7GnrLWT8zn", "6aqVkmKxEX", "3wZ006TDkc", "2KxNgb1dD8" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733178816144, 1733181715269, 1733179085319, 1737523819077, 1732782238254, 1732152737762, 1732102312797, 1732710004371, 1732052028051, 1730281814185, 1732044399675, 1732052869682, 1732632202654, 1734407439369, 1732149708521, 1729560564453, 1730105485675, 1732296599435, 1732297196911, 1730235111147, 1732768988789, 1732191715744, 1732184274183, 1732052333786 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_uMgx" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_1MCs" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_uMgx" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_uMgx" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_Hf3Q" ], [ 
"ICLR.cc/2025/Conference/Submission7131/Area_Chair_FeAw" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_KvZD" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_KvZD" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_1MCs" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_Hf3Q" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_Hf3Q" ], [ "ICLR.cc/2025/Conference/Submission7131/Reviewer_uMgx" ], [ "ICLR.cc/2025/Conference/Submission7131/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your comments and interest in our work. To answer your question, we believe the majority of the errors in the cardiac example is due to the complexity of the example and the relatively small dataset (only 24 geometries). We do assume that as we increase the size of the training data, the error in the wavefront will decrease. We observed that with the addition of the H1 norm penalty in the loss function for this example we were able to better predict a sharper wavefront, and we will explore other penalties in the future. In the future we will explore option 1 that you suggested and we thank you for this suggestion.\"}", "{\"comment\": \"Thank you to reviewer uMgx for their time and effort in reviewing our manuscript. All recommended minor changes have been reflected in the manuscript.\\n\\nWe would like to add one comment. Since the coefficients of the PDE in the spatial domain are varying (and the solution is space dependent), we cannot expect G-FuNK to extrapolate as well in these out-of-distribution examples. This is not unique to G-FuNK but common to most state-of-the-art data-driven Neural Operators. 
The novelty of this work is a Neural Operator that can predict on varying coefficients and geometries at the same time, something that has not been effectively addressed in the context of diffusive PDEs until now.\"}", "{\"comment\": \"We would like to thank the reviewer for their comments and their time in reviewing our manuscript. Responses to your questions are below.\\n\\n-For the heat equation example we used the software Mathematica, which takes as an input the continuous domain, PDE, and initial/boundary conditions, and solves the PDE to arbitrarily high precision. The rate of convergence is not applicable here since we do not pass in a discretization of the domain. (The numerical convergence examples were previously added to the Appendix)\\n\\n-There are several different errors which accumulate to give the final error (e.g., number of training samples, number of nodes in the graph, optimization error). The approximation of the eigenfunctions via the graph Laplacian is one of these errors, and it is not likely that the analytical eigenfunctions would improve the error more than changes to the other factors mentioned above. In addition, our diffusion coefficient is not constant. Therefore, we don\\u2019t think it is feasible to compute the analytical eigenfunctions for each sample, and since the purpose of this paper is to predict on anisotropic domains, we do not see the utility in these computations for our purposes.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I thank the authors for the revision. Just a few minor comments:\\n\\np.16, line 821-828, reads \\\" [15\\u221230] \\u00d735 cm \\\" or \\\"[15\\u221230] \\u00d7[15\\u221230] cm\\\" Is the notation consistent with the rest of the paper? I'm concerned about the use of hyphen \\\"-\\\" for range. 
Second, if these are supposed to describe area, shouldn't the unit be also for area, like $cm^2$?\\n\\np.16, line 851, should read \\\"Therefore, we anticipate...\\\" and not \\\"Therefor, we anticipate...\\\".\\n\\np.16 line 851-855, \\\"we anticipate that few-shot learning will be effective with a small fraction of the number of original training geometries included from the out-of-distribution domain to improve the pre-trained model\\u2019s abilities on larger geometries.\\\"\\n\\nI can see why \\\"a small fraction of the number of original training geometries included from the out-of-distribution\\\" may help with prediction. However, it would not be an out-of-distribution test anymore. To me, it sounds like the network can only learn to interpolate within the data set, and is not really learning any underlying physics or the governing equation.\"}", "{\"comment\": \"Thank you once again for these great questions.\\n\\nThere are a few advantages to our approach over traditional spectral methods. Spectral methods suffer when there are very large gradients, sharp changes in parameters, and very non-linear reaction terms, which occur in the reaction-diffusion and cardiac EP examples. This is why there are currently only FEM solvers openly available for these equations. Since in the proposed method the G-FuNK layer uses a spectral and a local branch (top and bottom parts in the G-FuNK layer), the method is better suited to learn global and local interactions and effectively combines this information to learn the solution generator. Second, while in the examples we are learning solutions to certain PDEs, ultimately our method is data-driven, so we can learn the dynamics of anisotropic diffusive systems from data alone without knowing any underlying equations. 
We have presented a novel neural operator which uses spectral methods and can learn the underlying physics of a system without needing to know the underlying system of equations that traditional spectral methods require. \\n\\nWe would like to note two things: first, that the FEM software used is highly optimized for solving these sets of equations, and second, that for the cardiac EP example the eigendecomposition took less than 9 seconds, so G-FuNK, with a total wall time of less than 10 seconds, would still greatly outperform any traditional solver, which takes anywhere from 8 to 22 minutes (depending on the size of the heart). In cardiac EP, it is common to perform parameter sweeps and test different set-ups (initial conditions) to determine an optimal treatment, so being able to perform these parameter sweeps on completely new patients while only needing to perform eigendecomposition once is of great advantage in terms of computational efficiency and work overhead. While there is a computational overhead for initially training the model, the ability to quickly infer accurate solutions on new unseen domains outweighs the computational demands of training G-FuNK.\\n\\nThere is currently no spectral solving software available to solve the equations used in the reaction-diffusion and cardiac EP examples, so unfortunately, we cannot directly compare the efficiency between our method and a spectral solver.\"}", "{\"comment\": \"Thank you for the authors' responses. However, I still have some concerns:\\n\\nThe authors encode the PDE parameters into the edge features of the graph Laplacian, which effectively corresponds to the finite difference discretization of $-div(\\\\kappa \\\\nabla u)$ on regular grids. In this context, the so-called Graph Fourier Neural Kernels are essentially equivalent to a spectral method, so better performance is naturally expected. 
However, this raises an important question: what is the advantage of using this approach compared to a traditional spectral method, especially since the discretization and computation of eigenvectors are already in place?\\n\\nEfficiency is another significant concern. Could the authors provide more details to justify the computational efficiency of the proposed method?\"}", "{\"comment\": \"I thank the authors for their response and rebuttal.\\n\\n> We can add this discussion to the manuscript but this kind of out-of-distribution domain adaptation was not a primary goal in this work.\\n\\nI think this should be clarified in the paper.\\n\\n> the G-FuNK model used in example 2 was used to predict on these out-of-distribution trajectories and performance did decrease with a rel. l2 error of 21.4%. \\n\\nWhere should I look at in the manuscript? which test case is \\\"example 2\\\"? Where do you show that the rel. error is around 21%?\"}", "{\"comment\": \"We would like to thank the reviewer for their time and effort put into reviewing our manuscript. We addressed their comments.\", \"major\": \"W1) Some works provide theoretical approximation error for vanilla neural operator frameworks on a single domain, however, they do not provide generalization error bounds at all or explain the numerical performance to address. Even regarding theoretical results on approximation error, these generally are quite broad in terms of the class of models that permit and do not have specific rates. Model specific approximation error results are hard to come by, especially on problems defined on a family of domains. \\n\\nW2 and W6) Yes, incorporating a PDE loss, as in PINNs, is possible within this framework, but we chose not to include it to highlight the specific advantages of the G-FuNK layer. Adding a PDE loss would make it challenging to attribute the model's success to our method's novelties and significantly increase training time. 
For the cardiac EP example, implementing PINNs would be particularly complex due to the need for a local metric tensor or coordinate system on the 3D surface, as Euclidean space alone cannot represent the spatial derivatives. Additionally, parameterizing the multiply connected (N=5) topological domain of the cardiac geometry in $\\mathbb{R}^3$ is non-trivial.\\n\\nOur approach is designed as a data-driven framework that can generalize across a family of PDEs without requiring explicit knowledge of the underlying equations. While our training data comes from simulations of known PDEs, the architecture leverages this indirectly, such as by incorporating diffusivity into the weighted graph. In the cardiac EP example, solution errors primarily arise from wavefront time delays, but the overall wavefront form remains physically consistent. PINNs are better suited for cases where physical principles cannot be learned from the data alone, but they may not be ideal for this application due to the challenges outlined above and their computational expense.\\n\\nW3) No, the solution does not need to be smooth. We present in both the reaction-diffusion and cardiac EP examples a solution where the \\u201cwavefront\\u201d is almost discontinuous and the PDE is nonlinear. Examples of this discontinuity are shown in Courtemanche et al. It is expected that G-FuNK would perform similarly to FNO/GEO-FNO in examples such as the viscous Burgers equation previously shown in other works, but with the advantages noted within this paper. We note that G-FuNK can be constructed to be identical to FNO but can also handle other cases. \\n\\nW4) This is a widely discussed problem with neural ODEs and is an ongoing field of research. It should be noted that while the proposed work uses a neural ODE, G-FuNK can operate to predict the next time step as previous works have done. 
In particular, G-FuNK can be combined with an implicit solver, of course with significantly increased computational cost. \\n\\nW5) We appreciate the reviewer\\u2019s suggestion to compare data generation, training, and prediction times. The primary motivation behind G-FuNK is to accelerate clinical decision-making for new patient-specific geometries of the left atrium, where multiple pacing sites and frequencies must be tested to determine optimal ablation strategies.\\n\\nIn a standard finite element solver workflow, each simulation for a patient-specific geometry takes approximately 13.5 minutes per pacing site. Testing multiple pacing sites for numerous patients quickly becomes computationally expensive. In contrast, G-FuNK requires a one-time offline training cost (1\\u20132 days) on 24 patient geometries with varying pacing sites. Once trained, it can make predictions for new geometries and pacing configurations in seconds, enabling instantaneous parameter sweeps to determine optimal treatment strategies. This one-time effort amortizes over future test cases, significantly reducing the overall computational burden as the model generalizes without requiring retraining.\\n\\nWe will clarify this in the revised manuscript and include a comparison of the computational costs in a clinical workflow, highlighting how G-FuNK achieves cost-effectiveness for large-scale simulation predictions, particularly for cardiac EP.\\n\\nW7) In examples 2 and 3, we extrapolate to different domains, and in examples 1 and 3 we extrapolate to different parameters of the PDE. In fact, example 2 is almost exactly what you suggest with the different domains, but with varying-sized 2D rectangles. The test domains for the reaction diffusion and cardiac EP examples are unseen during training. The cardiac EP example shows a case of extrapolation and domain-adaptation, as the test domain is on a never-before-seen geometry with completely different boundaries and domain. 
We also believe this example shows that it is parameter adaptable since the test geometry has its own unique diffusive field that is different from all the other domains used in training.\\n\\nWill fix minor issues.\"}", "{\"summary\": \"The authors propose an interesting surrogate model that combines graphs as the discretization method for Fourier neural operators. I believe the manuscript may be accepted after a major revision.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper proposes merging graphs within the standard surrogate models, which allows estimating the solution of 2nd order PDEs with varying coefficients on complicated geometries.\", \"weaknesses\": \"I believe there are several important details missing in the manuscript. Below, I list them.\", \"questions\": [\"**Major:**\", \"Is there an error estimator in the prediction? How can one trust the outcome of a fitted G-FuNK model for an unseen problem?\", \"Is there a way to enforce conservation laws (or some notion of structure preservation) in prediction if the underlying PDE admits such constraint?\", \"Does the solution/data need to be smooth? Can you try out the viscous Burgers' equation with emerging discontinuity as viscosity goes to zero?\", \"Is the outcome ODE stable? Is there a guarantee on stability of the ODE?\", \"Can you compare the Data-Generation/Training/Prediction time of the proposed method versus the ones from a standard finite element/volume solver in the presented test cases? I believe comparing only the time/complexity of prediction against a standard solver is very much misleading.\", \"If we know the underlying PDE, wouldn't it make sense to incorporate that in the loss, similar to what PINN does?\", \"Can the author show a case of extrapolation? To me, similar to PINNs, the proposed method can only be used as an efficient interpolator within the space of training data. 
This makes me doubt the claims on \\\"parameter and domain-adaptation\\\". For example, if you train your model to estimate the heat equation in a 1D problem inside the domain [0,1], can you test it on the domain [-10,10] with different boundary conditions?\", \"**Minor:**\", \"Abstract: \\u201c\\u2026 for which the highest-order term in the PDE is diffusive...\\u201d What does it mean? Do you mean the highest-order term is even, or that it has to be 2? I\\u2019m guessing second-order PDEs, which needs to be clarified in the Abstract.\", \"Abstract: \\\"without the need for retraining\\\" and not \\\"without need for retraining\\u201d.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for reviewing our manuscript. We have addressed your comments below.\", \"weaknesses\": \"1.\tThe limitations should be addressed.\\n2.\tThe so-called graph Fourier transform is actually a spectral method, which needs the eigenvectors pre-calculated first. This procedure could make the method not useful for large-scale problems. Computational efficiency and scalability should be reported, including offline computation for the eigenvectors and online computation time. A comparison with FNO and GNN in terms of efficiency is also absent.\\n\\nThank you for this point; we agree that as the problem goes to a larger scale, the calculation of eigenvectors becomes more difficult. We report on line 212 that the computational complexity is of the order $O(k_{max}^2 n j)$ for computing the $k_{max}$ lowest eigenvectors of an undirected graph. Additionally, we are able to use a subsampled domain and predict the solution of a higher-resolution domain, which shows the advantage of using the global eigenvectors. The computation of the eigenvectors for our case takes on average 8.58 seconds for the 200 lowest modes for each of the 25 cardiac atrial geometries (approximately 20k nodes in each graph). 
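The offline eigenvector computation discussed in this rebuttal can be sketched as follows. This is an illustrative toy only (a 6-node path graph with numpy's dense `eigh`), not the paper's implementation; on a ~20k-node atrial mesh one would use a sparse solver instead.

```python
import numpy as np

# Toy weighted graph: a 6-node path. In G-FuNK the edge weights would
# encode the anisotropic diffusivity / fiber field; here they are all 1.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Offline step: compute the k_max lowest eigenpairs once per geometry.
# (On a large mesh, a sparse routine such as scipy.sparse.linalg.eigsh
# would replace this dense eigendecomposition.)
k_max = 3
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
modes = eigvecs[:, :k_max]            # n x k_max low-frequency graph Fourier basis
```

For a connected graph the smallest eigenvalue is 0 (constant mode), and the retained columns form an orthonormal low-frequency basis that is reused for every online prediction on that geometry.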
We report the online computation time for the predictions to be less than 1 second for the cardiac EP examples, in contrast to the numerical simulation time of anywhere from 8-22 mins per example. All three networks were similar in efficiency in that both training time and inference were similar, and a discussion will be added to the paper to note this.\\n\\n\\n3.\tThe novelty of graph Fourier transform is limited. This is a widely studied area in the graph neural network community.\\n\\nWhile we agree with the ubiquity of GFT, we propose that the novelty in our method comes from representing the PDE learning problem using the spectrum of the weighted graph\\u2019s Laplacian for a more generalizable FNO that can also incorporate anisotropic fiber fields into the edges naturally (so that their effects on the solution field do not have to be directly learned by the network). Our comparisons demonstrate that for diffusive PDEs, using this approach instead of message-passing GNN-based methods is advantageous for including the fiber fields without increasing the number of input parameters in our network. In this regard, G-FuNK is very lightweight.\", \"questions\": \"1.\tWhy do the authors use the neural ODE model? Any gain from the specific model?\\n\\nYes, we use the neural ODE framework for a few reasons. First, learning $\\\\frac{du}{dt}$ via a neural ODE allows for easily applying Dirichlet boundary conditions since, before integrating, the boundary points can simply be set to a value. Second, in cardiac electrophysiology (and many other systems) it is common to apply external stimulus at different timepoints, so being able to arbitrarily apply a \\u201cnon-learned\\u201d external stimulus is of great advantage. Third, not being constrained to a constant time step is an advantage since at different time points within a simulation a smaller time step might be necessary for better stability. 
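The three neural-ODE advantages listed above (Dirichlet values re-imposed around integration, an arbitrary external stimulus, and a variable time step) can be illustrated with a minimal explicit-Euler loop. `f` below is only a stand-in for the learned right-hand side, and the pacing protocol is invented for illustration; this is not the paper's solver.

```python
import numpy as np

def f(u):
    # Stand-in for the learned du/dt network; here just linear decay.
    return -0.5 * u

def step(u, dt, dirichlet_idx, dirichlet_val, stimulus=None):
    """One explicit Euler step with Dirichlet BCs and optional stimulus."""
    if stimulus is not None:
        u = u + stimulus              # inject a "non-learned" external stimulus
    u = u + dt * f(u)                 # integrate the learned dynamics
    u[dirichlet_idx] = dirichlet_val  # re-impose boundary values every step
    return u

u = np.ones(8)
stim = np.zeros(8)
stim[3] = 2.0                         # pace node 3 at the first step only
u = step(u, dt=0.1, dirichlet_idx=[0, 7], dirichlet_val=0.0, stimulus=stim)
for _ in range(4):                    # the time step is free to change per step
    u = step(u, dt=0.05, dirichlet_idx=[0, 7], dirichlet_val=0.0)
```

The same loop structure works with higher-order or implicit stepping schemes, which is the flexibility the rebuttal points to.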
While using a neural ODE framework is not necessary in all cases, we believe it provides numerous advantages compared to the standard methods. We can also use different time stepping methods (implicit schemes for improved stability, higher order methods when necessary).\"}", "{\"comment\": \"Thank you for your review of our manuscript. We appreciate your input. Below we have responded to the comments in the review.\\n\\n-We will add to the manuscript the requirement that the domains $\\\\Omega_{\\\\alpha}$ must be diffeomorphic (we certainly do not need conformal equivalence and do not have conformal equivalence in any of the examples). Thank you for pointing out this missing detail. The domains do indeed need to be diffeomorphic. We do not provide a theoretical justification; based on the current state of the literature on operator learning theory, most results are for vanilla frameworks on fixed domains. There are rarely any theoretical guarantees for approximation error on model-specific variants of the canonical operator learning problem, at least that we are aware of. In the simplest case, our method reduces to a standard Fourier Neural Operator, for which there are universal approximation guarantees by Kovachki et al., but we can of course handle more general domains than just a regular square grid. \\n\\n-In the case of the heart surface, the PDE is being solved by finite elements on the surface and the anisotropic Laplacian term is defined on the surface, which is precisely what we aim to approximate with our graph Laplacian on the surface. The main point of G-FuNK is that the canonical Fourier construction is replaced by the graph Laplacian, which adapts to both the diffusivity fields and the surface on which they are defined. The specific architecture is an analog to the Fourier Neural Operators. 
\\n\\n-The PDE solver for the 3D Cardiac EP example, which is an open-source fully optimized finite element library for these problems, takes roughly 13 minutes to execute for a single set of initial conditions. For applications of these models, one is interested in doing parameter sweeps across initial conditions and stimulus to find some optimal treatment procedures for patients. It is unclear to us what the reviewer means by these being low-dimensional problems. To our knowledge, the literature on neural operators for PDEs seems to be primarily focused on 2D and 3D examples, which is what we provide benchmarks for.\"}", "{\"comment\": \"Following up on your answers. For part a), what about the rate for the standard solver?\\n\\nAs for part b), let me clarify my question using a simple geometry. Suppose your domain is a 2D-sphere embedded in R^3. Given uniform sampling data, we understand that the eigenfunctions of Laplace-Beltrami (which is what you are trying to estimate with the Graph-Laplacian framework) are simply spherical harmonics. Now, you are given randomly sampled data lying on the sphere, and again, my understanding is that you are using the Graph-Laplacian to ultimately approximate spherical harmonics and subsequently employ G-FuNK on data expressed in these coordinates. Intuitively, we expect the best performance is achieved when you are using the analytic spherical harmonics since your estimated spherical harmonics are subject to errors (for spectral convergence error rates, see e.g. Belkin and Niyogi, or recent papers by the Garcia-Trillos and Jeff Calder groups). 
What my question was: Can you verify numerically that if you use the estimated eigenvectors, your results will converge to the result using analytic eigenfunctions, when both use the same truncated eigenmodes?\"}", "{\"metareview\": \"The paper introduces a novel family of neural operators, termed Graph Fourier Neural Kernels (G-FuNK), designed to learn the temporal dynamics of diffusive PDEs across multiple anisotropic domains with varying parameters. The method embeds geometric and directional information about the domains by combining graph Laplacian-based constructions for domain-specific components with non-adapted components learned from training data using FNO. Additionally, an integrated ODE solver is used to capture the system\\u2019s time evolution.\\n\\nThe paper presents an interesting approach and some compelling results, although a number of weaknesses were raised. First, there is a lack of theoretical justification for why the proposed method should work in general, particularly regarding the interplay between graph-based Laplacian constructions and the learned Fourier components. A clearer convergence analysis or performance comparison with standard methods (e.g., FEM) would be necessary to substantiate some of the claims made in the paper. Second, reviewers noted that the paper lacks a quantitative understanding of the limitations of the Graph Laplacian FFT, which is central to the method. This raises concerns about scalability, stability, and general applicability beyond the presented examples. Finally, there is limited analysis regarding the impact of architectural choices (e.g., increasing the number of layers or parameters) on the method\\u2019s performance.\\n\\nThe panel ultimately recommends rejection. While the method demonstrates potential for domain-specific applications, the lack of theoretical justification, convergence analysis, and scalability experiments limits its impact for the broader ICLR audience. 
That said, the contributions could be valuable in application-focused venues where solving PDEs on anisotropic domains is of high practical importance. The authors are encouraged to further develop the theoretical underpinnings of the approach, provide comprehensive comparisons to standard methods, and explore more comprehensively the scalability of the proposed method. With these improvements, this work could make a meaningful contribution to both the scientific computing and machine learning communities.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors primarily focused on clarifying the motivations behind their method, elaborating on some of the design choices, and explaining how the proposed approach complements traditional methods such as finite element methods (FEM) and spectral methods. While these clarifications provided helpful insights into the intent and applicability of the work, they did not fully address the concerns regarding theoretical justification, convergence analysis, and quantitative comparisons to standard solvers.\"}", "{\"comment\": \"Thanks for the clarification.\\n\\nI understand the usual argument that \\\"evaluating NO is very fast (about 1 second). The training time (about 1 day) can be amortized in many-query settings\\\". I think a more informative comparison could strengthen the argument. Specifically:\\n- Solve the PDE numerically with high accuracy (about 13min). \\n- (Option 1) Reduce the mesh size such that the numerical solution has the same accuracy as the NO; what is the runtime?\\n- (Option 2) Solve the linear system in the same reduced mode as the NO; what is the accuracy and time?\\n\\nIn Figure 4, the traveling wave in the target solution has a much sharper transition region, and the peak of the solution is noticeably red (>10mV), but the prediction has a wider transition region and smaller magnitude (orange, <-10mV). Overall, the NO prediction seems more diffusive. 
What might be the reason? How to improve the lag of the wavefront (1.62 ms)? \\n\\nI find this work interesting compared to many existing neural operator studies, as it tackles challenging 3D examples. Given the complexity of the problem, I see this as a proof of concept with potential for further development. That said, since I found myself drawn to the cardiac EP example but it is not my area of expertise, I\\u2019ll lower my confidence level to 3.\"}", "{\"summary\": \"This paper proposes Graph Fourier Neural Kernels (G-FuNK) for learning solution generators of time-dependent partial differential equations (PDEs) on graphs. G-FuNK aims to be \\\"geometry-invariant\\\" by leveraging the spectral domain of graphs through the Graph Fourier Transform (GFT), similar to how the Fourier Neural Operator (FNO) achieves \\\"discretization-invariance\\\" in a regular domain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The writing is clear, and the combination of graph neural networks (GNNs) and FNO is well motivated and novel to the best of my knowledge. The numerical examples show a gradual increase in complexity, leading to the final example on cardiac electrophysiology, which is complex and demonstrates the practical utility of the proposed framework.\", \"weaknesses\": \"In Table 1, for the three methods (G-FuNK, FNO, GNN), the number of parameters can differ by orders of magnitude, making it challenging to evaluate the improvement in performance.\\n\\nThe statement \\\"Our method predicts entire trajectories in under 1 second, significantly outperforming traditional numerical methods. For example, cardiac EP simulations typically take at least 15 minutes on 12 CPU cores for one given set of initial conditions.\\\" could be more precise, particularly regarding the numerical solution being compared. 
It seems the cardiac EP simulations from numerical methods serve as the high-fidelity solutions that are used to generate the training data and evaluate the error. However, G-FuNK learns on reduced modes (k_max), which may lead to limited accuracy. It would be more informative if the numerical solutions were computed on a coarser mesh that achieves similar accuracy to G-FuNK, or using the reduced modes.\", \"questions\": \"Given that the paper focuses on PDEs on graphs, it seems that the \\\"Multipole Graph Neural Operator for Parametric Partial Differential Equations\\\" (MGKN), which claims to be mesh-invariant, would be a more suitable baseline for comparison. While the original MGKN framework does not explicitly tackle changing geometry, it seems it can still be applied in this setting, since only the graph is used as an input. There are also many similarities between these two approaches: the Fourier transform and inverse Fourier transform in G-FuNK play a role similar to kernel convolutions in MGKN, which are computed using the multipole algorithm. Additionally, both methods employ some form of truncation to make computation more tractable (e.g., limiting modes or long-range interactions). I would appreciate further discussion or comparison of these two approaches.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes Graph Fourier Neural Kernels (G-FuNK), which aim to solve time-dependent, nonlinear partial differential equations (PDEs) with varying parameters and domains. G-FuNK leverages a parameter- and domain-adapted spectral method. These operators are particularly well-suited for problems involving anisotropic diffusion and complex geometries. 
The paper demonstrates G-FuNK's effectiveness on several applications, such as heat equation simulations, reaction diffusion equations, and cardiac electrophysiology, showing promising results in terms of accuracy and computational speed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. G-FuNK leverages a parameter- and domain-adapted spectral method such that it is well-suited for problems involving anisotropic diffusion and complex geometries.\\n\\n2. The application on Cardiac Electrophysiology is very interesting.\\n\\n3. The paper presents a detailed comparison with FNO and GNN methods, showing that G-FuNK can outperform these methods\", \"weaknesses\": \"1. The limitations should be addressed.\\n\\n2. The so-called graph Fourier transform is actually a spectral method, which needs the eigenvectors pre-calculated first. This procedure could make the method not useful for large-scale problems. Computational efficiency and scalability should be reported, including offline computation for the eigenvectors and online computation time. A comparison with FNO and GNN in terms of efficiency is also absent.\\n\\n3. The novelty of the graph Fourier transform is limited. This is a widely studied area in the graph neural network community.\", \"questions\": \"1. Why do the authors use the neural ODE model? Any gain from the specific model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Once again we thank the reviewer for their valuable input and have provided responses below.\\n\\nQ1) As for point 3, a low-dimensional PDE usually has a 1-3 dimensional spatial domain; otherwise it is high-dimensional. One would consider an NN when the PDE is high-dimensional since the classical method is subject to the curse of dimensionality. 
However, if one applies an NN method to low-dimensional problems and compares to classical methods, the accuracy is usually poor (as shown in Table 1 with an error of 10%). \\nWhile theoretical convergence is not available, can you numerically show some convergence for one of the examples? Let's just pick the simplest example (the 2D heat equation). For example, check the numerical convergence rate as a function of training data and compare it with classical PDE solvers.\\n\\nA)\tThank you for this suggestion; we have compared the numerical convergence for the heat example shown in example 1. The results are as follows:\\n\\nNumber of samples: 50, 100, 250, 500, 750\\n\\nRel. l2 error: 0.1001, 0.0794, 0.0641, 0.0357, 0.0292\\n\\nSo, it does seem that we have convergence as we increase the number of samples, as expected. These results will be added to the manuscript. We expect for examples 2 and 3 that the error will likewise decrease as we increase the amount of data, as we see in this numerical convergence test. For example 3, we presume that the 24 geometries we trained on may not be expressive enough for the distribution of left atrial geometries, but generating these meshes and fiber fields (diffusion fields) is time consuming, so we restricted ourselves to 24 (+1 test).\\n\\nQ2) Since you clarify that the key idea is replacing the FFT with a transformation that maps to eigenvectors of the graph Laplacian as coordinates, if the domain is simple like a 2D box, one can indeed solve a Sturm-Liouville eigenvalue problem to attain analytic eigenfunctions (which is what you are trying to estimate with graph Laplacians) such that you can replace the FFT with an expansion over these eigenfunctions. I would expect that this would be the upper limit of the performance?
Can you run a numerical experiment to confirm this?\\n\\nI believe these few numerical tests would clarify the advantage and limitations of the proposed schemes.\\n\\nA) The set of eigenfunctions is complete in L^2 (and therefore in subspaces of smoother functions, e.g. Sobolev spaces) and therefore its use does not limit the performance of our method. Of course, given the finite amount of data, only a finite dimensional space spanned by a finite number K of (lowest frequency) such eigenfunctions is used: this creates a bias in the model, which can be made arbitrarily small upon increasing K, but increasing K increases the variance in our estimators, so that a finite optimal K (which could be obtained upon cross validation) can be chosen. We can add this discussion to the manuscript.\"}", "{\"comment\": \"We thank this reviewer for the additional time they have spent reviewing our work. It is very appreciated. Below we have addressed the previous comments.\\n\\nQ1) ... Our approach is designed as a data-driven framework that can generalize across a family of PDEs without requiring explicit knowledge of the underlying equations....\\n\\nI see the motivation for a data-driven estimator. However, in prediction we would like to guarantee some physical constraint, like mass, momentum, or energy conservation. Even having a PINN loss would not guarantee physical constraints in prediction. However, if you consider standard finite volume/element methods, they guarantee some notion of conservation by design regardless of the underlying PDE. The proposed method does not seem to answer this doubt.\\n\\nA1) We agree with your statement, and we currently do not implement any such conservation constraints in our network. In current literature, most Neural Operators do not enforce these constraints, but this is a growing topic in the community. This is a future direction we would like to explore, and we thank you for your comment. 
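As a toy illustration of the post-hoc constraint handling discussed in this exchange (explicitly not something G-FuNK implements), a predicted field can be shifted so that its discrete integral matches a prescribed total mass. The field values and cell volumes below are made up for the sketch.

```python
import numpy as np

def enforce_total_mass(u_pred, mass_target, cell_volumes):
    """Uniformly shift a predicted field so its discrete integral
    (sum of value * cell volume) matches a prescribed total mass."""
    mass_pred = float(np.dot(cell_volumes, u_pred))
    shift = (mass_target - mass_pred) / cell_volumes.sum()
    return u_pred + shift

vol = np.full(5, 0.2)                         # uniform cells covering [0, 1]
u_pred = np.array([1.0, 2.0, 3.0, 2.0, 1.0])  # hypothetical network output
u_fixed = enforce_total_mass(u_pred, mass_target=2.0, cell_volumes=vol)
```

This uniform-shift correction is the simplest option; structure-preserving neural operators pursue the same goal inside the architecture rather than as a post-processing step.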
\\n\\nQ2) In fact, example 2 is almost exactly what you suggest with the different domains, but with varying-sized 2D rectangles....\\nCan you please clarify the setting of example 2? What is the domain that is trained on, and what is the domain that is tested on? The text is a bit confusing to me. Are you training on a smaller domain and testing on a larger domain? Or are you considering the same domain size, by varying mesh spacing (which would make it an interpolation)?\\n\\nA2) In the second example, we trained on multiple trajectories, each with a rectangular domain whose edge lengths were independently chosen in the range [15, 30]x[15,30], so each trajectory within the dataset has an associated randomly chosen rectangular domain (each $\\\\Omega_\\\\alpha$ is different). Likewise, in the test trajectories, each domain is a rectangle independently chosen in the same range [15, 30]x[15,30], but each trajectory has a unique domain and initial condition. So altogether, all trajectories in both the training and test sets have different domains, which is why we claimed the method is domain adaptable.\\n\\nWe apologize, as we believe we misinterpreted the original question. We would like to note that accuracy in extrapolation is not guaranteed, which is in fact a rather typical phenomenon in many numerical techniques, including Neural Operators. We generated out-of-distribution test trajectories for the random rectangle example where the edges of the rectangle were randomly picked from the range [15, 30]x35 (so that one length is in distribution and the other well outside the training distribution), and without retraining, the G-FuNK model used in example 2 was used to predict on these out-of-distribution trajectories; performance did decrease, with a rel. l2 error of 21.4%. The majority of the error is associated with a slightly faster wavefront in the prediction compared to the target. 
While the performance did decrease in an out-of-distribution test case, we still believe that G-FuNK is parameter and domain-adaptable since it can generalize very well to domains and parameters that are in distribution. We can add this discussion to the manuscript, but this kind of out-of-distribution domain adaptation was not a primary goal in this work.\"}", "{\"summary\": \"This paper proposes a neural network model to learn a solution operator of time-dependent second-order semi-linear PDEs that takes a diffusion tensor and random sample points of a family of domains as inputs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The numerical demonstration in the cardiac EP example is interesting, since it is the one I understood the least. I will rely on an expert in this area to give a meaningful comment on this example.\", \"weaknesses\": \"This paper proposes a neural network model to learn a solution operator of time-dependent second-order semi-linear PDEs that takes a diffusion tensor and random sample points of a family of domains as inputs.\\nWhile the first input has been considered by many authors, the difficulty here is to allow the domain to be chosen from a family of domains, denoted by $\\\\\\\\{\\\\Omega_\\\\alpha\\\\\\\\}_{\\\\alpha\\\\in \\\\mathcal{A}}$. My first thought is whether the setup makes sense, since there is no discussion on the class of domains that is imposed. I believe this won't work on arbitrary classes of domains. For example, for Riemannian manifolds, I would believe that if any pair of manifolds in this class have Riemannian metrics that are diffeomorphic (or even stronger, such as conformally equivalent), then the learning problem makes sense. One would need a notion of continuity between any pair of domains in the class; otherwise, it is not feasible to interpolate (to have a map that can interpolate between the training domains). 
In the numerical examples shown in the paper, there is an affine transformation between any pair of arbitrary side lengths in the 2D Nonlinear Reaction-Diffusion. For the Cardiac Electrophysiology, although the measured data come from 25 patients, the PDE is solved on processed domains (finally 24 of these), and I suspect that these domains are diffeomorphic.\\n\\nThe only interesting numerical demonstration is the cardiac EP example, since it is the one I understood the least. I will rely on an expert in this area to give a meaningful comment on this example. In terms of methodology, I cannot understand why the Graph Laplacian structure is helpful, unless the derivatives in Eq. (2) are defined with respect to the Riemannian metric of the embedded manifolds. It is also not obvious to me why the construction of the G-FuNK layer should be the way to go, since I cannot derive it from any basic principle. \\n\\nWhile some numerical results look interesting, I don't really understand why the approach should work in general due to the lack of theoretical justification. I am also not sure how the approach behaves if one increases the number of layers or parameters in each G-FuNK layer. Finally, the three numerical examples are low-dimensional problems (2D or 3D); I would naively believe the standard PDE solvers should be able to solve the problem accurately in a reasonable time. Solutions to these (FEM in the cardiac EP example) are being used to train the G-FuNK model. Based on these concerns, I believe this paper is technically (or mathematically) not interesting and not suitable for publication in ICLR. 
I would urge the authors to consider submitting this work to a domain science journal that is relevant to personalized cardiac EP.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their time and effort in reviewing our manuscript. We agree that this should be added to the manuscript to clarify the limitations of G-FuNK. We have addressed this at the end of the discussion section.\\n\\nWe have added to the manuscript that G-FuNK only predicts on in-distribution samples, and we reference the Appendix section where we added the example of the out-of-distribution test as the reviewer suggested.\"}", "{\"comment\": \"Thank you for clarifying. As for point 3, a low-dimensional PDE usually has a 1-3 dimensional spatial domain; otherwise it is high-dimensional. One would consider an NN when the PDE is high-dimensional since the classical method is subject to the curse of dimensionality. However, if one applies an NN method to low-dimensional problems and compares to classical methods, the accuracy is usually poor (as shown in Table 1 with an error of 10%).\\n\\nWhile theoretical convergence is not available, can you numerically show some convergence for one of the examples? Let's just pick the simplest example (the 2D heat equation). For example, check the numerical convergence rate as a function of training data and compare it with classical PDE solvers.\\n\\nSince you clarify that the key idea is replacing the FFT with a transformation that maps to eigenvectors of the graph Laplacian as coordinates, if the domain is simple like a 2D box, one can indeed solve a Sturm-Liouville eigenvalue problem to attain analytic eigenfunctions (which is what you are trying to estimate with graph Laplacians) such that you can replace the FFT with an expansion over these eigenfunctions. 
I would expect that this would be the upper limit of the performance? Can you run a numerical experiment to confirm this?\\n\\nI believe these few numerical tests would clarify the advantage and limitations of the proposed schemes.\"}", "{\"comment\": \"I thank the authors for their responses. They clarified some of my doubts, but I still have a few remaining issues.\\n\\n> W2 and W6) ... Our approach is designed as a data-driven framework that can generalize across a family of PDEs without requiring explicit knowledge of the underlying equations....\\n\\nI see the motivation for a data-driven estimator. However, in prediction we would like to guarantee some physical constraint, like mass, momentum, or energy conservation. Even having a PINN loss would not guarantee physical constraints in prediction. However, if you consider standard finite volume/element methods, they guarantee some notion of conservation by design regardless of the underlying PDE. The proposed method does not seem to answer this doubt.\\n\\n> In fact, example 2 is almost exactly what you suggest with the different domains, but with varying-sized 2D rectangles....\\n\\nCan you please clarify the setting of example 2? What is the domain that is trained on, and what is the domain that is tested on? The text is a bit confusing to me. Are you training on a smaller domain and testing on a larger domain? Or are you considering the same domain size, by varying mesh spacing (which would make it an interpolation)?\"}", "{\"comment\": \"Thank you so much for your support of our work. Below are our responses to weaknesses and questions.\", \"weaknesses\": \"In Table 1, for the three methods (G-FuNK, FNO, GNN), the number of parameters can differ by orders of magnitude, making it challenging to evaluate the improvement in performance. \\n\\nThank you for this observation; we would like to note that this was something we tried to work around, but there were a few issues. 
First, for an FNO with a single Fourier layer, with a width of 50, there are already over 500k parameters. Additionally, for the GNN model, there was a bottleneck in fitting the model on the GPU during training, and in addition we observed that making the GNN model larger only marginally improved its performance.\\n\\nThe statement \\\"Our method predicts entire trajectories in under 1 second, significantly outperforming traditional numerical methods. For example, cardiac EP simulations typically take at least 15 minutes on 12 CPU cores for one given set of initial conditions.\\\" could be more precise, particularly regarding the numerical solution being compared. It seems the cardiac EP simulations from numerical methods serve as the high-fidelity solutions that are used to generate the training data and evaluate the error. However, G-FuNK learns on reduced modes ($k_{max}$), which may lead to limited accuracy. It would be more informative if the numerical solutions were computed on a coarser mesh that achieves similar accuracy to G-FuNK, or using the reduced modes.\\n\\nThis value should be more precise; the issue with giving a precise number is that the simulation time changes from patient to patient (i.e., a bigger heart means a larger mesh and can take longer). A more precise answer is an average of 13.2 minutes, ranging from 8.12 to 21.97 minutes. This information will be added to the manuscript.\", \"questions\": \"Given that the paper focuses on PDEs on graphs, it seems that the \\\"Multipole Graph Neural Operator for Parametric Partial Differential Equations\\\" (MGKN), which claims to be mesh-invariant, would be a more suitable baseline for comparison. While the original MGKN framework does not explicitly tackle changing geometry, it seems it can still be applied in this setting, since only the graph is used as an input. 
There are also many similarities between these two approaches: the Fourier transform and inverse Fourier transform in G-FuNK play a role similar to kernel convolutions in MGKN, which are computed using the multipole algorithm. Additionally, both methods employ some form of truncation to make computation more tractable (e.g., limiting modes or long-range interactions). I would appreciate further discussion or comparison of these two approaches.\\n\\nThank you for this question. The MGKN relies upon message passing, which uses an edge-aggregation framework that was observed to inhibit a GNN from learning the effects of anisotropic diffusion on a traveling wave. We discuss this within the text on lines 132-134, and furthermore the GNN we presented as a comparison also uses a message passing framework, further presenting the need to develop a graph-based network that can better handle anisotropic data. The examples in that paper were only on a 1D problem and a 2D linear problem. We were not able to figure out a feasible way to adapt their code for 3D time-dependent dynamics. Additionally, with their method, it is a difficult and sensitive problem to obtain block low-rank multiscale matrices with nonideal partitions of the surface.\"}" ] }
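The mode truncation both methods share can be sketched with a graph Fourier transform on a toy path graph (illustrative only; `k` below plays the role of G-FuNK's k_max): transform the signal, zero out all but the k lowest modes, transform back, and observe the truncation error shrink as k grows.

```python
import numpy as np

# Graph Fourier basis: Laplacian eigenvectors of a 32-node path graph.
n = 32
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)          # columns ordered from low to high frequency

x = np.linspace(0.0, 1.0, n)
signal = np.exp(-((x - 0.5) ** 2) / 0.02)   # smooth bump on the graph

def truncate(sig, k):
    """GFT -> keep the k lowest modes -> inverse GFT."""
    coeffs = U.T @ sig            # graph Fourier transform
    coeffs[k:] = 0.0              # mode truncation
    return U @ coeffs             # inverse transform

errs = [np.linalg.norm(signal - truncate(signal, k)) / np.linalg.norm(signal)
        for k in (4, 8, 16, 32)]
```

For a smooth signal the relative error decays quickly with the number of retained modes and vanishes when all n modes are kept, which is the bias-vs-cost trade-off the truncation discussion is about.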
4hPwLg7zD3
Fourier Head: Helping Large Language Models Learn Complex Probability Distributions
[ "Nate Gillman", "Daksh Aggarwal", "Michael Freeman", "Chen Sun" ]
As the quality of large language models has improved, there has been increased interest in using them to model non-linguistic tokens. For example, the Decision Transformer recasts agentic decision making as a sequence modeling problem, using a decoder-only LLM to model the distribution over the discrete action space for an Atari agent. However, when adapting LLMs to non-linguistic domains, it remains unclear if softmax over discrete bins captures the continuous structure of the tokens and the potentially complex distributions needed for high-quality token generation. We introduce a neural network layer, constructed using Fourier series, which we can easily substitute for any linear layer if we want the outputs to have a more continuous structure. We perform extensive analysis on synthetic datasets, as well as on large-scale decision making and time series forecasting tasks. We also provide theoretical evidence that this layer can better learn signal from data while ignoring high-frequency noise. All of our results support the effectiveness of our proposed Fourier head in scenarios where the underlying data distribution has a natural continuous structure. For example, the Fourier head improves a Decision Transformer agent's returns across four benchmark Atari games by as much as 377\%, and increases a state-of-the-art time series foundation model's forecasting performance by 3.5\% across 20 benchmarks unseen during training. We release our implementation at https://nategillman.com/fourier-head
[ "LLM", "Fourier", "smooth function", "multi-class classification" ]
Accept (Poster)
https://openreview.net/pdf?id=4hPwLg7zD3
https://openreview.net/forum?id=4hPwLg7zD3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "znKDT2IKbh", "yGKmUnxZCB", "wR4OUddhUA", "vG83JNd5kg", "u6GJbrgQo8", "tMSGX335FF", "sgRr8ygXKH", "sHB7aga0EA", "r5U0tdrYm6", "pGh5HP6k60", "oQJiPRFU0j", "lSBYSSWFXs", "l8nGfhqgwg", "kOFvJVEmMk", "kKlqFrXkgO", "hNEyfNnxm8", "gJpoHVo9Iz", "fd52UAddlp", "eUzBlzo2f8", "aCjPmrtKRn", "ZJbDqoqRai", "Z9Bp3ex1rH", "YUPajhJsjC", "Y76c1XGhxV", "WBR2O5KuQM", "Uh2jdbSoRg", "TNRAnHBLVO", "THlYet6LnH", "OLwl4O2PGH", "OJhIqSSSxQ", "Nu2NqhjjRT", "MmCR0CbSNP", "M4CXaYwgMC", "IxKsyJX1Yx", "HH5dOv47eG", "FvWrfvWH1e", "FHbXbrqimB", "CE9eEqS96X", "BonP0FB7Oo", "AlTsxaaXgc", "AMIiezcO0I", "A85Sd9id4U", "9keu8JSw7F", "7M1i6GjCMS", "71XumHOusu", "71KpEhCGGX", "3zI8CXATGu", "39c7sjzQnK", "2t6YLOACKS", "0nFAm0e9mN", "0T1L2vELiK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733243424690, 1732199875043, 1732648832420, 1732196380600, 1730389025633, 1733243286658, 1732202186028, 1732195229600, 1732657206995, 1732193318996, 1733107161972, 1732206888339, 1732585560369, 1732194566704, 1732611696532, 
1733243357923, 1732202097351, 1732207639180, 1732208234198, 1732196065835, 1732198115426, 1732200651936, 1732570396419, 1732201999232, 1732193969822, 1732193551802, 1732756540605, 1729399804119, 1732476796121, 1732570628870, 1735413699896, 1732194325906, 1732570736340, 1737523416364, 1732207673800, 1733146653445, 1730612001071, 1732570690982, 1730710864043, 1732203459377, 1732205909634, 1732205689259, 1732574027488, 1732195458699, 1732201137236, 1732198704695, 1733243221098, 1732551730365, 1732658405233, 1732648311322, 1732196660101 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_aPiu" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_WB6H" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_WB6H" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_qzaH" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_WB6H" ], [ "ICLR.cc/2025/Conference/Submission818/Area_Chair_oxft" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Area_Chair_oxft" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_7iR1" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_qzaH" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_aPiu" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_7iR1" ], [ "ICLR.cc/2025/Conference/Submission818/Reviewer_qzaH" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ], [ "ICLR.cc/2025/Conference/Submission818/Authors" ] ], "structured_content_str": [ "{\"title\": \"Following up\", \"comment\": \"We just wanted to follow up, and say thank you for your effort reviewing our paper, and for your engagement throughout the rebuttal period!\"}", "{\"title\": \"Author response\", \"comment\": \"## Q2: how to demonstrate more empirical impact?\\n\\nTo strengthen our empirical contribution, we have added results for three more Atari games. 
**In addition to our previously reported results on Seaquest, the paper now includes results on BankHeist, DoubleDunk, and Gravitar, which demonstrate that the Fourier head significantly outperforms the baseline.**\\n\\nIn this table, we report normalized returns (mean $\\\\pm$ standard deviation, averaged over four seeds) for the Decision Transformer agent across the four Atari games. The results show that the Fourier agent obtains higher returns than the Linear agent across all games.\\n\\n| Classification Head | BankHeist | DoubleDunk | Gravitar | Seaquest |\\n|-------------------|------------|-------------|-----------|-----------|\\n| Linear head |-0.09 $\\\\pm$ 0.05 | -72.72 $\\\\pm$ 33.08 | 1.32 $\\\\pm$ 0.17 | 2.53 $\\\\pm$ 0.63 |\\n| Fourier head | **0.92 $\\\\pm$ 0.33** | **45.45 $\\\\pm$ 36.36** | **4.98 $\\\\pm$ 0.93** | **3.70 $\\\\pm$ 0.47** |\\n\\nFurthermore, **we also demonstrate that the Fourier head consistently outperforms the Linear head, irrespective of the quantity of Fourier frequencies, for all these additional games.** [(link to graph)](https://drive.google.com/file/d/1TeWuaxUyFN76oaqGDzYZ9wn4Ggq3Ohj1/view?usp=drive_link)\\n\\nLastly, while we agree that the paper\\u2019s RL results are relatively stronger than the time series forecasting results, we want to underscore that a 3.5% accuracy improvement on a recent SOTA forecasting model is no easy feat. For comparison, in the original Chronos paper, the authors find that their novel architecture beats the next best task-specific model by only 0.9%. 
Our paper\\u2019s 3.5% forecasting improvement didn\\u2019t require any hyperparameter changes to the original configuration, making the Fourier head an easy \\u201cdrop-in\\u201d replacement for a performance increase.\"}", "{\"title\": \"Author response\", \"comment\": \"In the original Fourier Basis Density Model paper, the authors showed that the Fourier inductive bias is indeed natural for representing an (a priori) unbounded range of values once a tanh transformation is involved. In more detail\\u2013one of their modeling tasks involves learning a mixture of 25 Gaussians using a Fourier density with the tanh reparameterization, where most of the probability density is concentrated inside the range $[-10, 10]$. They find that the reparameterized Fourier density model can accurately learn the density, while capturing more modes and using fewer parameters than alternative models. (In case you\\u2019re curious, [here is a link](https://drive.google.com/file/d/1nDU994lZkMQfDNn9yO0tArk6XTQXWcdh/view?usp=sharing) to an illustration of this example from their paper.)\\n\\nWe agree with the reviewer that results on scaling to even larger data size and model size would be interesting. We would like to clarify that our Chronos experiments, which demonstrate the effectiveness of Fourier Head, are already \\u201clarge-scale\\u201d. The model has 20M parameters, and is pre-trained on over 10 million data points (the same setup as used by the Chronos paper). Further scaling up the Chronos model size and data size by orders of magnitude would be beyond the scope of our academic compute budget. 
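To make the tanh reparameterization discussed above concrete, here is a minimal sketch (the helper name is ours, not code from either paper): a density $p$ on $(-1, 1)$ becomes a density on the whole real line under $x = \tanh(y)$, via the change-of-variables factor $|dx/dy| = 1 - \tanh^2(y)$.

```python
import numpy as np

def pushforward_to_real_line(p_on_interval, y):
    """Transform a density on (-1, 1) into a density on the real line.

    p_on_interval: callable evaluating the density on (-1, 1), e.g. a
    Fourier-head density. Returns q(y) = p(tanh(y)) * (1 - tanh(y)^2).
    """
    x = np.tanh(y)
    return p_on_interval(x) * (1.0 - x ** 2)  # d/dy tanh(y) = 1 - tanh(y)^2
```

Total probability mass is preserved under this change of variables, so the bounded Fourier parameterization can represent an a priori unbounded variable; the heaviness of the tails is controlled by how much mass $p$ places near $\pm 1$.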
Additionally, the scaling behavior of time-series forecasting models is itself an active research area; we refer the reviewer to recent papers such as [(Shi et al., NeurIPS 2024)](https://arxiv.org/abs/2405.15124) on this topic.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q3: the paper should include a comparison with models using decoupling such as TEMPO\\n\\nThank you for bringing the TEMPO paper to our attention. Since their paper also studies how to perform time series forecasting using an LLM, we have added a citation to TEMPO in our related works section. However, we believe TEMPO is not directly comparable to the Fourier head, for the following reasons:\\n\\n* TEMPO employs an STL decomposition as a preprocessing step for improved training on time series data using an LLM, arguing that a transformer\\u2019s self-attention mechanism is not guaranteed to be able to disentangle the trend and seasonality components (their Theorem 3.1). \\n* TEMPO proposes novel prompting techniques to improve performance. \\n\\nNeither of these contributions is comparable to our Fourier head, since we propose an alternative way to improve the classification layer in the transformer, which is not related to either preprocessing or prompting.\"}", "{\"summary\": \"The authors argue that current methods for parameterizing a discrete distribution over numerical data suffer from ignoring ordinal structure, which should imply that adjacent discrete buckets will have similar density and therefore \\\"smoothness\\\" in the probability mass function. To fix this oversight, the authors propose a new parameterization on the coefficients of a Fourier series, leading to a smooth function on the interval [-1,1], which is then quantized. The new parameterization is therefore a drop-in alternative to a uniform bucketing of the interval. 
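The parameterization just described can be sketched in a few lines (our illustration, not the authors' released code; the function name and coefficient shapes are assumptions):

```python
import numpy as np

def fourier_head_pmf(a, n_bins):
    """Categorical distribution over uniform bins on [-1, 1] from a
    truncated Fourier series.

    a: complex coefficients (standing in for what a linear layer outputs).
    Writing p(x) = |sum_j a_j exp(i*pi*j*x)|^2 makes the density
    nonnegative by construction; its Fourier coefficients are the
    autocorrelation of a.
    """
    centers = -1.0 + (2.0 * np.arange(n_bins) + 1.0) / n_bins  # bin centers
    j = np.arange(len(a))
    density = np.abs(np.exp(1j * np.pi * np.outer(centers, j)) @ a) ** 2
    return density / density.sum()  # quantize: bin mass from density at center
```

Because adjacent bins sample a smooth function, they receive similar mass, which is exactly the ordinal inductive bias described in the summary; an unconstrained linear-plus-softmax head imposes no such structure.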
The method is evaluated on toy univariate densities as well as on an offline reinforcement learning problem and in time-series forecasting, and the results indicate that using the Fourier head leads to lower errors in density estimation and higher returns in reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed method is simple and outlined with clarity in the paper. While the method is not complex, it is relatively novel to my knowledge. The significance is also reasonably large because modeling continuous numerical values using discrete tokens is increasingly popular.\", \"weaknesses\": \"To my understanding, the main goal of the paper is to propose a practical method, and given this goal, the empirical evaluation is not very impressive. I'll break this criticism down into a few subcategories:\\n\\n1. Emphasis on smoothness: The authors devote a lot of space and attention to the notion of \\\"smoothness\\\", proposing a new metric to measure it and including this metric in all the evaluations. However, from a practical standpoint it's not clear why we should care about smoothness independent of its effect on metrics like MAE or RMSE. In fact, it's possible to contrive examples where we want less smoothness (related to the square wave examples in the appendix), and it's not clear a priori that the marginal distributions for a particular downstream application will be \\\"smooth\\\". The \\\"smoothness\\\" numbers therefore feel like a distraction from what really matters, which is whether this ordinal inductive bias actually helps the model fit the data distribution. In many cases, the method seems to improve smoothness without affecting reward/loss or vice versa. \\n\\n2. Limited empirical impact: while Fourier head does seem to yield significant benefits in offline RL, it doesn't seem to have a significant effect on time series modeling. 
The benefit in terms of MASE and especially in terms of WQL is very marginal, and if I were looking for ways to improve my model, I might not adopt the additional complexity needed for such a small improvement, which is probably on par with tuning other hyperparameters or making small architectural changes. It might be helpful to identify possible explanations for why the effect is relatively minor in time series but more pronounced in offline RL. For example, are the next-action distributions significantly different in their multimodality? It might be much more compelling to replace the time series experiments with additional offline RL experiments if that application happens to be the ideal use case for this method. \\n\\n3. Limited baselines: Fourier head is only compared to the most naive possible baseline, uniform binning on [-1, 1]. In practice, there are more widely-used alternatives, such as Gaussian mixture models (GMMs) and quantile regression. Both of these techniques have an ordinal bias and should learn solutions that are much more smooth. I don't know if these methods are viewed as out-of-scope in this paper because they are not learned with cross-entropy loss. From one perspective, it might be reasonable to limit the investigation to discrete tokenization methods and discrete loss functions, but it does make the practical impact lower, as it's hard to tell whether this method is actually the best among all simple options or just an improvement upon simple uniform binning. This particular subcategory of criticism feels especially pertinent given the toy experiments in Section 4, where Fourier head is shown to approximate a GMM. It seems reasonable to conclude that in many cases a GMM should also therefore be able to approximate Fourier head. 
Is the converse not true, and how important are the cases where GMMs might not be able to match the performance of Fourier head?\\n\\nBeyond empirical evaluation, I think there are also other potential weaknesses:\\n\\n1. Limited expressiveness: Presumably this method only works for bounded sequences. In the case of RL this might be reasonable if the state and action spaces are constrained. In the case of time series, this limits applications to series without a significant trend component, which would eventually cause the values to exit the range of past observations. \\n\\n2. Additional hyper-parameters in the form of chosen Fourier series frequencies and regularization strength.\", \"questions\": \"I included a few questions in my \\\"Weaknesses\\\" response. I've also included a few below:\\n\\n1. How were the frequencies used in the time series experiments chosen? Were they chosen a priori or through a cross-validation procedure? If cross-validation, how were the splits constructed?\\n\\n2. Why not explore other bases besides the Fourier basis? Is there something intrinsically better about that basis? Alternatively, there are many other parameterizations that would encourage smoothness. For example, one could parameterize only the differences between buckets and regularize these differences to be small. The final probability mass function would be calculated by integrating the differences. Is there a reason to believe a priori that this approach might perform worse?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Following up\", \"comment\": \"We wanted to follow up, and check if our answers have satisfied your concerns. 
Thanks again for your effort reviewing our paper, and for your engagement throughout the rebuttal period!\"}", "{\"title\": \"Author response\", \"comment\": \"## Q7: exploring bases other than Fourier basis\\n\\nWe want to acknowledge that there may be other bases that perform as well as the Fourier basis. Our paper is a first step towards exploring this direction, and we expect that our results will inspire follow-up works that explore different bases. However, at a practical level, the Fourier basis is more computationally tractable (i.e., fewer FLOPs) and more stable to compute than many alternatives, such as the Chebyshev basis. Thanks for asking this question.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the thoughtful feedback! Below we provide detailed answers to your questions. We have also uploaded a revised manuscript, with changes written in blue.\"}", "{\"comment\": \"Thank you for making the requested writing changes. I've updated my score.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the thoughtful feedback! We were able to produce the additional results that you asked for, summarized here:\\n* **We conduct an additional ablation which demonstrates that the benefit of the Fourier head persists as the *dataset size* scales.**\\n* **We conduct an additional ablation which demonstrates that the benefit of the Fourier head persists as the *model size* scales.**\\n\\nBelow, we provide more details, as well as question-specific responses. 
We have also uploaded a revised manuscript, with changes written in blue.\"}", "{\"title\": \"Quick follow up\", \"comment\": \"Dear reviewer 7iR1,\\n\\nThank you again for your constructive feedback and your engagement with our response!\\n\\nAs the discussion phase is ending soon, we would like to quickly follow up with you and see if our responses on _when to use Fourier head_, _time-series forecasting baselines and Chronos_, and _the broader impact of Fourier head_ help address your concerns. We look forward to your comment and will try our best to address any remaining questions you might have!\\n\\nBest,\\n\\nThe authors\"}", "{\"title\": \"Author response\", \"comment\": \"## Q3: Can the Fourier head be generalized to output spaces which are not interval shaped?\\n\\nThis is certainly possible, but beyond the scope of the paper. One way to do this would be to have the learned output categorical distribution be a mixture of Fourier heads and a linear classification head. For example, since the action space for our RL task is a quantized version of $S^1 \\\\sqcup S^1 \\\\sqcup \\\\\\\\{0, 1\\\\\\\\}$, this hypothetical architecture would learn two Fourier heads, each with output dimension 8, as well as a linear classification head, with output dimension 2. Such a model would also have a 3-dimensional classification layer which functions as a high level controller, choosing which of the three classification heads to route the model input to. It would need to be trained with masking to ensure that gradients only update for the correct head.\\n\\nWe don\\u2019t conduct experiments on this generalized classification head for two reasons: first, it is sufficiently complicated that it is beyond scope, and we hope that future works will be inspired to make this applicable to those other domains. 
And second: our Decision Transformer results show that a single Fourier classification head does a much better job than the linear baseline, even while incorrectly placing a continuity inductive bias between some of the token boundaries, as your review noted. We believe that the positive results, despite this limitation, underscore just how problematic it is that many models continue to use a linear classification head, while the next-token distribution seems to be \\u201cstarving\\u201d for at least some continuity inductive bias. To demonstrate this more robustly, we have added results for three more Atari games. **In addition to our previously reported results on Seaquest, the paper now includes results on BankHeist, DoubleDunk, and Gravitar, which demonstrate that the Fourier head significantly outperforms the baseline.**\\n\\n| Classification Head | BankHeist | DoubleDunk | Gravitar | Seaquest |\\n|-------------------|------------|-------------|-----------|-----------|\\n| Linear head |-0.09 $\\\\pm$ 0.05 | -72.72 $\\\\pm$ 33.08 | 1.32 $\\\\pm$ 0.17 | 2.53 $\\\\pm$ 0.63 |\\n| Fourier head | **0.92 $\\\\pm$ 0.33** | **45.45 $\\\\pm$ 36.36** | **4.98 $\\\\pm$ 0.93** | **3.70 $\\\\pm$ 0.47** |\\n\\nIn this table, we report normalized returns (mean $\\\\pm$ standard deviation, averaged over four seeds) for the Decision Transformer agent across the four Atari games. The results show that the Fourier agent obtains higher returns than the Linear agent across all games.\"}", "{\"title\": \"Additional comments\", \"comment\": \"Thank you for your detailed response and additional experiments. I have some remaining questions/comments:\\n\\n### $1/n^2$ regularization\\nThe explanation of the regularization is still not entirely clear to me. You write:\\n> for the class of Fourier series which have continuous second derivatives, the Fourier coefficients decay on the order of $1/n^2$. To impose this regularity assumption on the learned Fourier densities, we [...] 
add a regularization term to the loss\\n\\nOn first reading, I understood this to mean that the $k^2$ factor in the regularization $\\sum_k k^2 |c_k|^2$ is used to enforce that the coefficients decay like $1/n^2$ (and this causes the Fourier head output to have continuous second derivatives?). However, it's unclear to me what this could mean in the case of truncated Fourier series. For instance, the condition that $c_n\\in O(1/n^2)$ is an asymptotic statement; when the Fourier series is truncated, it vacuously holds regardless of whether the regularization is present. If you had some non-asymptotic interpretation of $c_n\\sim 1/n^2$ in mind, I don't know what it could be or why the $k^2$ regularization would cause it to hold.\\n\\nI find the motivation in de la Fuente (2024) II.D to be much more clear -- the regularization term is equal to the total squared variation (Eq 10), which is a measure of smoothness.\\n\\nAlso: it seems that smoothness could be increased either by increasing regularization strength or decreasing the number of Fourier frequencies $N$. How do these relate? When is it appropriate to tune one vs the other?\\n\\n### Exposition of the Fourier head and the de la Fuente (2024) paper\\nAdditionally, in my opinion [de la Fuente et al (2024)](https://arxiv.org/abs/2402.15345) does a better job of explaining some other details:\\n- The purpose of first learning autocorrelation coefficients then converting to Fourier coefficients (Alg. 1 Step 4) is explained in their section II.A\\n- The normalization by $1/c_0$ (Alg. 1 Step 5) is explained in their II.B.\\n\\nIt would be helpful to briefly mention these to help explain Algorithm 1. (Or maybe just refer the reader to the sections in their paper.)\\n\\nOverall, it appears that your work takes significant inspiration from the de la Fuente paper. 
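The total-squared-variation identity referred to above follows from a standard computation (our sketch, not text from either paper): for a truncated density $p(x) = \sum_{k=-N}^{N} c_k e^{i\pi k x}$ on $[-1,1]$ with $c_{-k} = \overline{c_k}$, orthogonality of the complex exponentials gives

```latex
\int_{-1}^{1} |p'(x)|^2 \, dx
  = 2\pi^2 \sum_{k=-N}^{N} k^2 |c_k|^2
  = 4\pi^2 \sum_{k=1}^{N} k^2 |c_k|^2 ,
```

so penalizing $\sum_k k^2 |c_k|^2$ penalizes the squared $L^2$ norm of $p'$ up to a constant, a smoothness measure that remains well defined even for a truncated series.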
Your paper cites theirs at several places, which is good, but it might also be appropriate to mention them in the intro and/or when the Fourier head is introduced in Sec 2.1/2.2. My understanding is that the Fourier head is effectively the \\\"Fourier basis density model\\\" that they propose swapped in for the last layer of a deep neural model.\\n\\nApologies for not bringing this up earlier -- I hadn't looked at the de la Fuente paper at the time.\\n\\n### Point estimate vs probabilistic forecasting\\nThank you for the pointwise regression experiments. Intuitively, the fact that the pointwise regression predicts the mean even for bimodal distributions is not ideal. However, if one only cares about MSE, then your experiments show that the point estimate is perfectly fine. The paper would be better motivated if you could mention some examples of when the MSE alone is insufficient, and predicting the full density is superior to just the point estimate. It looks like the RL setting is one such example, since the agent needs to explicitly sample from the output distribution. (Maybe this should be made more explicit in the paper.) Are there similar examples for the time-series setting?\"}", "{\"title\": \"Author response\", \"comment\": \"## Q4: providing more details on the Decision Transformer and Chronos experiments\\n\\nFor both Decision Transformer and Chronos, we followed the same training recipes from their original implementations. At your suggestion, we\\u2019ve added brief summaries of them to the paper. We will share some details here as well.\\n\\n* *Our Decision Transformer experiments:* following the original implementation, we trained on 500k transitions observed by a DQN agent during training, for 5 epochs. We trained on the same model size as the original implementation (a GPT-1 model with approx. 
2.012M parameters), which takes about four hours on a single GPU.\\n\\n* *Our Chronos experiments:* following the original implementation, we trained for 200k steps, on the same model size as the original implementation (a T5 model with approx. 20M parameters), which takes just under 48 hours on 8 GPUs.\"}", "{\"comment\": \"I thank the authors for the clarifications and additional scaling results. Regarding the range of representable values, I agree in principle this can be addressed by compactifying the domain. But the more relevant question is whether the inductive bias of the model is natural for representing an unbounded range of values once a tanh-like transformation is involved.\\n\\nIt would be great to show similar scaling experiments for other tasks such as the Chronos experiment, where both the model and data size are much larger than the decision transformer experiment (e.g. the decision transformer model only has 2M parameters) and arguably more representative of the practical setting.\"}", "{\"title\": \"Following up\", \"comment\": \"We just wanted to follow up, and say thank you for your effort reviewing our paper, and for your engagement throughout the rebuttal period!\"}", "{\"title\": \"Author response\", \"comment\": \"## Q6: how were frequencies chosen for the time series experiments?\\n\\nWe didn\\u2019t use any clever procedure for selecting the Fourier frequencies for the time series experiments. We just started by sweeping over powers of $2$ to be efficient, because the Fourier head scaling law indicates that you might want to try up to $n_{\\\\mathrm{bins}} / 2 = 2048$.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q4: Is it a limitation that the Fourier series can only model periodic functions?\\n\\nIn our large-scale experiments we find that this isn\\u2019t an issue, for the very reason that you mentioned\\u2013namely, for the time series task the distribution is close to zero near the boundaries. 
**If needed, a tanh reparameterization can be used to map the domain $[-1,1]$ to the real line, and you can then truncate it. This would let the Fourier head learn non-periodic functions.** We have added an explanation about this in Section 2.4 of the updated manuscript.\\n\\nHowever, maintaining the original periodic formulation and using a Fourier head with a sufficiently large number of frequencies tends to solve the problem. This follows from the Fourier theory: if you have a smooth function on $[-1,1]$, but $f(-1)\\\\neq f(1)$, then you can learn a Fourier series that does a very good job approximating $f$ over the open interval $(-1,1)$, and it will only have problems near the boundaries $\\\\pm 1$. Using more frequencies guarantees that the approximation improves close to the boundaries.\"}", "{\"title\": \"Author Response Summary\", \"comment\": [\"We thank all the reviewers for the detailed feedback. We have conducted the additional requested experiments and incorporated the results in the paper:\", \"In response to Reviewer qzaH\\u2019s question about whether the benefit of the Fourier head persists at various *data scales*, we conducted an additional ablation study which demonstrates that the Fourier head consistently outperforms the Linear head baseline, across dataset sizes.\", \"In response to Reviewer qzaH\\u2019s question about whether the benefit of the Fourier head persists at various *model scales*, we conducted an additional ablation study which demonstrates that the Fourier head consistently outperforms the Linear head baseline, across model sizes.\", \"In response to Reviewer aPiu\\u2019s suggestion that we demonstrate further empirical impact with additional RL experiments, we provide additional RL experiments on three more Atari games, where the Fourier head agent consistently obtains higher returns than the linear head agent.\", \"In response to Reviewer aPiu\\u2019s suggestion that we compare the Fourier head to more baselines, we provide 
an additional baseline by constructing a Gaussian Mixture Model-based classification head, as well as a more challenging synthetic dataset using the Beta distribution. Our results on this extended baseline demonstrate that the Fourier head\\u2019s flexibility is needed to model distributions which are even slightly more complicated than Gaussians.\", \"In response to Reviewer WB6H's suggestion that we consider a regression baseline, we expanded the toy example experiments with a Pointwise Regression head. We demonstrate that this new model \\u201cregresses to the mean\\u201d because of the probabilistic nature of the dataset, and as a result it doesn\\u2019t have any advantage over the classification heads, since the classification heads model the probabilistic datasets by directly modeling the underlying probability distributions.\", \"In terms of writing updates to the manuscript (written in blue in the updated manuscript):\", \"In response to Reviewers qzaH and 7iR1, we added references to more LLM time series papers.\", \"In response to Reviewer qzaH, we added more details on the Decision Transformer and Chronos experiments to the appendix.\", \"In response to Reviewer WB6H, we clarified why quadratic fourier regularization is useful even when the Fourier series is truncated, and we fixed some minor typos.\", \"In response to Reviewer aPiu, we added more details about how to choose hyperparameters.\", \"In response to Reviewers aPiu and qzaH, we added details about how the Fourier head can be adapted to model unbounded values.\"]}", "{\"title\": \"Author response\", \"comment\": \"## Q2: the paper should include experimental results for the TimeLLM and GPT4TS models\\n\\nThank you for highlighting the TimeLLM and GPT4TS papers. In the updated manuscript we have added citations to them in the related works section. 
**However, we don\\u2019t believe that a direct comparison to those models is relevant, because those models require in-domain training, whereas Chronos does zero-shot domain transfer.**\\n\\nMore precisely, both the models TimeLLM and GPT4TS require fine-tuning and testing on each dataset separately. In contrast, Chronos is pretrained on a single dataset consisting of many real and synthetic time series, and it is designed to be evaluated on domains unseen during training. In our paper, we evaluate Chronos on 20 benchmark datasets unseen during training, without any additional dataset specialization or fine-tuning.\\n\\nAdditionally, we would like to emphasize that the goal of the paper is not to present the Fourier head as a tool just for time series, but rather as a tool for any domain where a classification head can benefit from having more continuous structure. Towards this, we would like to highlight the diverse types of transformer-based models that we used to study the Fourier head\\u2013our time series experiments use an encoder-decoder T5 architecture, our reinforcement learning experiments use a decoder-only GPT-class architecture, and our audio toy example uses an audio-spectrogram transformer.\"}", "{\"title\": \"Author response\", \"comment\": [\"Thank you for the thoughtful feedback! We were able to produce the additional results that you asked for, summarized here:\", \"We demonstrate additional empirical impact by adding results for three more RL tasks. 
In all these tasks, the Fourier head agent obtains much larger returns than the baseline.\", \"We demonstrate additional empirical impact by including two more ablations: one which demonstrates that the benefit of the Fourier head persists as the *dataset size* scales, and one which demonstrates that the benefit of the Fourier head persists as the *model size* scales.\", \"We introduce a new GMM baseline model, as well as a more complicated synthetic dataset, and demonstrate that the Fourier head\\u2019s added flexibility is needed to model this more challenging dataset.\", \"Below, we provide more details, as well as question-specific responses. We have also uploaded a revised manuscript, with changes written in blue.\"]}", "{\"title\": \"Author response\", \"comment\": \"## Q3: the toy experiment has limited baselines, and in particular would benefit from a comparison with a Gaussian Mixture Model\\n\\nTo address this concern we have implemented a GMM classification layer for which the means and standard deviations are learned, and the number of Gaussian components is a hyperparameter. As you predicted, our results show that substituting the GMM head in our toy experiment yields better performance than the Fourier head on the previous datasets. This is to be expected since the conditional distributions being learned in those datasets are precisely GMMs. \\n\\nTo highlight the flexibility of the Fourier head over the GMM head, we have added a more challenging dataset to the toy example, where the conditional distributions are based on a Beta distribution.\\n**On this new Beta dataset, the Fourier head achieves KL divergence 0.191 while the GMM head (with 2 Gaussians) achieves KL divergence 0.407, more than twice that of Fourier. 
Since the Beta distribution is a common and naturally occurring distribution, this comparison shows the advantage of using the Fourier head when the underlying next-token distribution might be complicated and unknown.**\\n\\nWe include results with the GMM head and Beta dataset in our latest draft. In particular, our numerical results show that the Fourier head learns the Beta distribution more robustly than both the GMM head and the linear head:\\n\\n| Dataset | Linear head (KL) | GMM head (KL) | Fourier head (KL) |\\n|-----------|-------------------|--------------------|--------------------|\\n| Beta | 0.234 $\\\\pm$ 0.032 | 0.407 $\\\\pm$ 0.012 | **0.191 $\\\\pm$ 0.016** |\\n\\nWe also analyze the effect of Fourier frequencies on learning this Beta dataset, and we find that the Fourier head indeed outperforms the other classification heads for sufficiently many frequencies. [(link to graph)](https://drive.google.com/file/d/1Ux_8fOkN98go58AeDJ1SzZweMWyBepwx/view?usp=drive_link)\\n\\nLastly, thank you very much for asking this question. Our choice of synthetic datasets in the toy experiment certainly makes it seem that the Fourier head could easily be replaced by a learned GMM.\"}", "{\"title\": \"Author response\", \"comment\": \"We appreciate the reviewer\\u2019s engagement with us and would like to put the contributions of our Fourier Head in proper perspective with respect to the time-series forecasting methods the reviewer brought up.\\n\\n1. *When to use Fourier Head:* As illustrated in our new toy experiments (revised manuscript, Figure 3 and Table 6), it is desirable to use the Fourier Head when we aim to model **complex** probabilistic distributions without knowing a priori the family of distributions. Particularly, Figure 3 shows that when the underlying distribution follows a Beta distribution, both vanilla linear head (no inductive bias) or GMM head (wrong inductive bias) are significantly outperformed by Fourier Head. 
Table 6 and Figure 9 [(link to figure)](https://drive.google.com/file/d/1ikH_DDJcVCTGslCzKxvSlYL326g8tpK2/view?usp=sharing) further show that when the target distribution has multiple modes (e.g. GMM or Beta), pointwise methods regress to the mean, and fail to capture any of the modes.\\n\\n2. *TS forecasting baselines:* To the best of our knowledge, GPT4TS, UNITS, and MOMENT are all pointwise forecasting models; they are hence \\u201cincompatible\\u201d with Fourier Head, which is designed to model (discretized) distributions. However, we believe the quoted statement from us remains true: one just needs to change both the \\u201chead\\u201d and the training objective (away from pointwise estimation), which we believe may lead to improved empirical performance as discussed in our response above. As the reviewer could see, these improvements are orthogonal to the design choices made by GPT4TS, UNITS, or MOMENT, meaning our contributions are complementary, not competing against these prior works.\\n\\n3. *Validity of the Chronos implementation:* Chronos is a state-of-the-art probabilistic TS forecasting model recently accepted by the TMLR journal. We integrated the Fourier Head implementation into Chronos for its competitive performance, as well as open-source data preparation and training pipelines. We acknowledge that compared with TimeLLM or other similar methods, Chronos requires an additional training / fine-tuning stage. We would also like to clarify that the vocabulary construction process is independent from the requirement of \\u201ctraining from scratch\\u201d: Chronos actually explored fine-tuning with pre-trained LLM weights but observed that it has marginal impact on the performance. Additionally, updating the \\u201cvocabulary\\u201d, or even incorporating continuous features, is a standard practice, as exemplified by numerous multimodal LLM works, such as LLAVA. We hence believe this should not be considered as a fundamental disadvantage.\\n\\n4. 
*Broader impact of Fourier Head:* Finally, we would like to clarify that we have demonstrated broader applicability of the Fourier Head beyond the time-series forecasting applications. We invite the reviewer to check our decision transformer experiments, especially the newly added evaluations [(link to figure)](https://drive.google.com/file/d/1TeWuaxUyFN76oaqGDzYZ9wn4Ggq3Ohj1/view?usp=drive_link) where Fourier Head demonstrates consistent and significant performance improvements.\\n\\nLastly\\u2013please let us know if there are additional questions that we can provide further information / experiments on. Thank you again for your time and effort reviewing our paper!\"}", "{\"title\": \"Author response\", \"comment\": \"## Q5: to what extent is it a weakness of the Fourier head that the user must select hyperparameters?\\n\\n**We have included additional experimentally-backed details to the manuscript to make it easier for the user to select hyperparameters**. Thank you for giving us the opportunity to clarify this point. In short:\\n\\n* When picking *regularization strength*\\u2013we find that in the low-frequency domain (i.e. frequencies in the single digits) using $\\\\gamma=0$ works best, and in the high-frequency domain (i.e. greater than 10 frequencies) using $\\\\gamma=10^{-6}$ works best. \\n\\n* When picking *Fourier frequencies*\\u2013our Decision Transformer results show that the model is reasonably robust to the choice of the number of frequencies. For example, our latest Decision Transformer results [(link to graph)](https://drive.google.com/file/d/1TeWuaxUyFN76oaqGDzYZ9wn4Ggq3Ohj1/view?usp=drive_link) show that the Fourier agent obtains higher returns than the Linear agent, irrespective of the quantity of Fourier frequencies. Meanwhile, for more complex problems such as zero-shot probabilistic time-series forecasting, one may prefer to use more frequencies as this results in a Fourier head with more modeling power. 
This is stated in our Theorem 3.3 (Fourier head scaling law) and demonstrated in Table 3.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q2: the Fourier head can only represent a finite range of values\\n\\nIn practice, we find that this is not an issue. **If needed, a tanh reparameterization can be used to map the domain $[-1,1]$ to the real line. This would let the Fourier head learn unbounded values. We have added an explanation about this in Section 2.4 of the updated manuscript.**\\n\\nHowever, even if we decide to use a bounded domain (as is the case in all the experiments in the manuscript), this still isn\\u2019t an issue, as evidenced by the original Chronos model\\u2019s SOTA time series forecasting accuracy, with datasets which are not a priori bounded within some finite range. Those authors modeled unbounded sequences using a combination of two techniques: 1) normalizing the time series so the individual values are small, and 2) ensuring that the next-token distribution is defined over a sufficiently wide interval.\\n\\nIn more detail: in the Chronos paper, the authors normalize the time series so that the mean of the absolute value in the historical context is equal to 1; this ensures that the time series values are small. Then, the architecture defines the range of tokens as equally spaced inside of $[-15, 15]$; this ensures that the model is capable of learning from the examples with significant trend components. 
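The two techniques just described can be sketched as follows (a simplified illustration: the bin count here is arbitrary and the details may differ from the exact Chronos implementation):

```python
import numpy as np

def normalize_and_tokenize(series, context, n_bins=4094, low=-15.0, high=15.0):
    """Mean-absolute scaling followed by uniform binning (Chronos-style sketch)."""
    # 1) Normalize so the mean absolute value of the historical context equals 1.
    scale = np.mean(np.abs(context))
    scaled = np.asarray(series) / scale
    # 2) Map each scaled value to one of n_bins equally spaced bins in [low, high].
    edges = np.linspace(low, high, n_bins + 1)
    tokens = np.clip(np.digitize(scaled, edges) - 1, 0, n_bins - 1)
    return tokens, scale

context = np.array([10.0, 20.0, 30.0])   # mean absolute value is 20
tokens, scale = normalize_and_tokenize(context, context)
assert scale == 20.0                     # values are rescaled to 0.5, 1.0, 1.5
```

After this rescaling, even a series with a strong trend lands in a narrow, well-covered band of tokens, which is why the bounded domain is not restrictive in practice.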
With this normalization and tokenization strategy, the Chronos model achieved SOTA accuracy because it turns out that even time series with aggressive trend components are normalized to values that can be forecasted accurately by the model.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q1: alternative ways of extracting continuous probabilistic predictions from LLMs over numerical data\\n\\nThank you for mentioning the papers [1] Gruver et al and [2] Requeima et al, we have added their citations to the discussion in the related works section. However, we note that the Fourier head avoids a practical limitation from both of those methods. **In short, the Fourier head\\u2019s design ensures that the categorical distribution which is learned during training is exactly the same distribution which is sampled from during evaluation. This is not the case for the methods in [1] and [2].**\\n\\nIn more detail\\u2013in the methods from [2], the authors state the following limitation of their work: *It must be noted that this approach does not guarantee that P(12) yields the mass assigned by the LLM to values in the bin [12, 13). However, we note that our method defines a valid predictive distribution, and we empirically observed that our predictive distribution closely matches the sampling distribution.* In the appendix, they share images that show how the distributions are visually similar, but not identical. In contrast: in the Fourier head, the mass assigned to any range of numbers is exactly equal to the integral. This is a theoretical and practical benefit of modeling tokens over numerical distributions continuously using a smooth density (as in the Fourier head) rather than learning a discretized hierarchical softmax (as in [1] and [2]).\"}", "{\"title\": \"Author response\", \"comment\": \"We have conducted an additional study to analyze whether dataset size has any effect on the relative performance of the Linear head and the Fourier head for Chronos. 
Our results show that, across dataset sizes, the Fourier head indeed yields more accurate forecasts than the Linear head. We have updated the manuscript with these changes. (And [here is a link](https://drive.google.com/file/d/1nMR6dNN_s3eJZ1QXFc1wAXjL9odXvM4v/view?usp=sharing) to the figure that we added to the paper.) Thanks for the suggestion!\\n\\nAdditionally, we have started running experiments on scaling model size for Chronos, and we are hoping to finish them before the end of the extended rebuttal period.\"}", "{\"summary\": \"The paper proposes the \\\"Fourier head\\\" as an alternative to linear classification heads for tasks where the output can be thought of as a quantization of a continuous space. The Fourier head learns the Fourier coefficients of the target function and then quantizes it instead of learning the quantized values directly. The authors theoretically and empirically demonstrate that varying the number of frequencies in the Fourier head trades off between smoothness and modelling performance. The Fourier head is shown to improve over the baseline linear head on a toy task, an agentic decision-making task, and for time-series modelling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides evidence that the Fourier head is an improvement over the baseline linear head in a wide variety of settings (toy example, agentic decision making, time-series modeling).\", \"The Fourier head improves both smoothness and accuracy (MLE, MASE, WQL).\", \"The exposition of the Fourier head is clear and easy to understand.\", \"Various practical details (training objective, hyperparameter choice, regularization, binning strategy) are provided. 
This is helpful for reproducibility and for those who wish to apply the Fourier head to other tasks.\"], \"weaknesses\": [\"For regression tasks with continuous-valued target output, the practical motivation for outputting an entire probability distribution, instead of just a point estimate, is not clear to me. Thus one of the main advantages of the Fourier head, that its outputs are smoother, feels somewhat unmotivated to me. I would like to see more discussion of why exactly the smoothness is beneficial in practice.\"], \"questions\": [\"The justification of the Fourier regularization term as imposing that the coefficients decay as $1/n^2$ is a little strange to me -- this is an asymptotic condition and, in practice, there are a finite number of coefficients, so isn't the condition always vacuously met?\", \"For the Decision Transformer, the output space as described in the paper is more naturally a quantization of $S^1 \\\\sqcup S^1\\\\sqcup \\\\\\\\{0,1\\\\\\\\}$ instead of $[-1, 1]$. (Either a shooting direction or a moving direction, each of which takes eight different values arranged on the circle $S^1$. Also two actions without an associated direction.) It would be interesting to see if the Fourier head can be generalized to output spaces that are not naturally interval-shaped.\", \"Actually, if I remember correctly, functions can only be approximated by Fourier series if they are periodic, i.e. functions on $S^1$. I suppose this does not affect the toy example and the time-series modelling, since the interval is chosen to be large enough that the distribution is near zero at the boundaries and so is approximately periodic. But I wonder if this is a limitation in other settings.\", \"Often, for tasks with continuous-valued target output (e.g. the toy example and time-series example), only a point estimate is necessary, not the full distribution. 
Hence a good baseline to include for the toy example is an MLP model with only one output dimension (possibly with atan nonlinearity to map outputs to the interval), evaluated on MSE. Likewise for the time-series example, but with MASE.\"], \"minor_typos\": [\"Line 193: \\\"hyperparamter\\\" -> \\\"hyperparameter\\\"\", \"Line 205: $c_n$ should be $c_k$\", \"Line 244: \\\"and $D$ be some measure of discrepancy such that $L^2$,\\\" should be \\\"such as\\\"?\", \"Line 257: \\\"Denote by $g_\\\\sigma(x)$ is\\\" delete \\\"is\\\"\", \"Line 515: \\\"descretized\\\" -> \\\"discretized\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed. \\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}", "{\"title\": \"Author follow-up\", \"comment\": \"We wanted to follow up to make sure that your concerns are being properly addressed. Please let us know if there are additional questions that we can provide further information / experiments on. Thank you again for your time and effort reviewing our paper!\"}", "{\"metareview\": \"This paper proposes a new mechanism to model discrete distributions using Fourier representations. The motivation is to better model the underlying low-frequency signals in the Fourier space. Empirical validation is done in two domains for offline RL and time series foundation models. The reviewers mainly appreciated the novelty of the approach and the ablations for the design choices. 
Their main concerns regarded the limited empirical validation both in terms of baselines and additional environments for testing.\", \"additional_comments_on_reviewer_discussion\": \"The paper was borderline with concerns ranging from clarity of exposition to limited experimentation. The authors were able to address some of these limitations. Some choices are still a little ad-hoc: for example, why focus on offline RL and time-series modeling, and not language modeling itself? The latter is the prime example of discrete distributions. Similarly, the Atari experiments are focusing on a non-overlapping set of environments from the original paper. Overall, I think more could have been done by the authors in either demonstrating universal utility (which I do not think is the case based on current empirical evidence), or focusing deeper on a specific domain (either offline RL, time-series modeling) and doing more extensive experimentation to identify the exact reasons for improved performance in those domains, or adding additional benchmarks.\"}
[(link to graph)](https://drive.google.com/file/d/1-mnyWD4F1Rqgj3-xSVenRp17UWGAMGU1/view?usp=drive_link)\\n\\n**These results demonstrate that the benefits of the Fourier head indeed persist with scale.**\"}", "{\"title\": \"Author follow-up\", \"comment\": \"We wanted to follow up to make sure that your concerns are being properly addressed. Please let us know if there are additional questions that we can provide further information / experiments on. Thank you again for your time and effort reviewing our paper!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author response\", \"comment\": \"## Q5: the minor typos\\n\\nThanks for catching those, we\\u2019ve fixed them in the latest draft :)\"}", "{\"title\": \"Author response\", \"comment\": \"As you requested, we have conducted an additional ablation study to analyze whether *model size* has any effect on the relative performance of the Linear head and the Fourier head for the Chronos time series forecasting task. Our results show that, across model sizes, the Fourier head indeed yields more accurate forecasts than the Linear head. It looks like we are not allowed to update the manuscript in OpenReview anymore, so [here is a link](https://drive.google.com/file/d/1ZsGoE1NN1GufSni9TP1MsSSJv2jQ3Oqs/view?usp=sharing) to the paper updated with these changes. (And [here is a link](https://drive.google.com/file/d/1S7bsytI1oN8W27medI9Po1bcZBjCMpbs/view?usp=sharing) to just the figure that we added to the paper.)\\n\\nThanks for your patience during this rebuttal period while we ran our experiments, and thanks for suggesting that we run this experiment. 
Please let us know if you have any other questions!\"}", "{\"summary\": \"The paper introduces a novel Fourier head for large language models (LLMs), designed to improve the modeling of continuous structures in non-linguistic tokens, such as decision-making sequences in games and time series forecasting\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Fourier head allows LLMs to better capture continuous structures in non-linguistic tokens, addressing the limitation in traditional models that use softmax over discrete bins. The authors provide both theoretical justifications and empirical analysis.\", \"weaknesses\": \"1. The author posits that Fourier Head can endow the model with a continuity prior, , which can be described as semantic adjacency. However, since LLMs inherently incorporate attention mechanisms that aggregate tokens with higher similarities, the contribution of the Fourier Head seems incremental.\\n\\n2. Regarding the time series prediction section, the author has employed the T5 architecture, yet the baseline comprises only this architecture, which is overly simplistic. There is a significant body of work on time series LLMs currently, with most eventually employing a linear layer (could also be replaced with a Fourier head), such as TimeLLM GPT4TS[1,2]. I believe the author needs to further supplement the experiments.\\n\\n3. Additionally, I think the effectiveness of the Fourier Head may stem from its ability to analyze input frequency and amplitude through Fourier series. The author should consider comparing methods that are based on decoupling[3].\\n\\n[1]Jin M, Wang S, Ma L, et al. Time-llm: Time series forecasting by reprogramming large language models[J]. arXiv preprint arXiv:2310.01728, 2023.\\n\\n[2]Zhou T, Niu P, Sun L, et al. One fits all: Power general time series analysis by pretrained lm[J]. 
Advances in neural information processing systems, 2023, 36: 43322-43355.\\n\\n[3]Cao D, Jia F, Arik S O, et al. TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting[C]//The Twelfth International Conference on Learning Representations.\", \"questions\": \"I am unclear about the organization of the paper, such as why the related work is placed in the latter half.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author follow-up\", \"comment\": \"We wanted to follow up to make sure that your concerns are being properly addressed. Please let us know if there are additional questions that we can provide further information / experiments on. Thank you again for your time and effort reviewing our paper!\"}", "{\"summary\": \"The paper proposes a Fourier Head based on the Fourier series as a replacement for the usual linear classification head to induce continuous densities across the class IDs. 
It presents a theoretical analysis of the expressiveness and smoothness trade-off as the number of frequencies increases and empirically shows the advantage of the Fourier Head over the conventional linear head in tasks with output classes with a continuous structure.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed Fourier Head layer is well-motivated in domains where classes have a continuous structure\", \"The method is straightforward and clearly explained\", \"Visualizations clearly demonstrate the advantage of the Fourier head on toy problems in learning continuous densities\", \"Experiments in RL and time-series show the Fourier head can improve performance in non-toy settings\"], \"weaknesses\": [\"As the paper is focused on improving LLM's ability to model numerical values, there are important related works that explore alternative ways of extracting continuous probabilistic predictions from LLMs over numerical data [1, 2], which are worth discussing. These methods use a hierarchical representation of the numerical values, encouraging nearby values to have similar densities, as they do not correspond to independently predicted classes. These methods therefore do not have the limitations of \\\"not consider any continuous structure that resides among the tokens\\\", which the Fourier head claims to address.\", \"Similar to methods based on classification over binned values, the Fourier head can only represent a finite range of values. Methods like [1, 2] in principle do not have this issue.\", \"The advantage of using the Fourier head seems most significant with small models trained on limited data. At a large scale, the model should be able to learn the continuous structure in the output classes, diminishing the benefit of using the Fourier head. 
It would be useful to show how the benefit of replacing the linear head with the Fourier head scales with training data and model size, such as for Chronos models of different sizes.\", \"[1] Gruver et al. 2023. Large Language Models Are Zero-Shot Time Series Forecasters\", \"[2] Requeima et al. 2024. LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language\"], \"questions\": [\"Can you provide more details on the Decision Transformer and Chronos experiments? How did you choose the size of the models, and how long to train?\", \"Can you show how the benefit of using the Fourier head varies as the model size or amount of training data increases? That is, does the benefit of the Fourier head persist at scale or does it vanish?\"], \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the thoughtful feedback! We were able to produce the additional results that you asked for, summarized here:\\n\\n* We introduce a new pointwise regression baseline model, and demonstrate that the Fourier classification head outperforms it.\\n* We provide more evidence that the Fourier head can model latent spaces which are not circle-shaped, by adding results for three more RL tasks where the latent space is a quantization of $S^1\\\\sqcup S^1\\\\sqcup \\\\\\\\{0,1\\\\\\\\}$. In all these tasks, the Fourier head agent obtains significantly larger returns than the baseline.\\n\\nBelow, we provide more details, as well as question-specific responses. 
We have also uploaded a revised manuscript, with changes written in blue.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q2: why should we impose the $1/n^2$ Fourier regularization term during training, if it\\u2019s a truncated Fourier series?\\n\\nIf the amplitudes of a truncated Fourier series decay like $1/n^2$, this ensures that the function is smoother.\\n\\nIn more detail\\u2013in the presence of limited data, there are many possible choices of Fourier coefficients which may fit the data equally well. We can express a preference towards smoother densities by penalizing the higher-order Fourier coefficients more than the lower-order Fourier coefficients. **Intuitively, this ensures that the model extracts low-frequency signal from the training data, while ignoring high-frequency noise.** We\\u2019ve added a summary of this in Section 2.4 of the revision. Thanks for this question; it seems like we didn\\u2019t make this point clear enough in the submission.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q1: for tasks with a continuous-valued target output, does it make more sense to output a point estimate?\\n\\n**We have conducted additional experiments, and our results show that the probabilistic nature of the tasks leads to a pointwise regression model \\u201cregressing to the mean\\u201d and performing poorly on the MSE metric.**\\n\\nIn more detail: we have created another toy example baseline model with the same architecture as the Fourier head model, but where we substituted the last classification layer with a linear layer having a single output dimension, and where we train using MSE loss. We refer to this model as \\u201cPointwise Regression\\u201d. 
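To see in miniature why such a head regresses to the mean, consider this self-contained toy (our own construction with fixed ±0.8 targets, not the datasets from the paper): the MSE-optimal pointwise fit to a symmetric bimodal target collapses to zero, the conditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric bimodal targets: z = b * 0.8 with b = +/-1, independent of x.
x = rng.uniform(-1, 1, size=(10_000, 1))
b = rng.choice([-1.0, 1.0], size=10_000)
z = 0.8 * b

# MSE-optimal pointwise predictor: least-squares fit of z ~ w*x + c.
X = np.hstack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, z, rcond=None)
pred = X @ w

# The fit collapses to ~0 everywhere, missing both modes at +/-0.8 ...
assert np.abs(pred).max() < 0.1
# ... so its MSE (~0.64) matches a naive model that always predicts 0.
mse = np.mean((pred - z) ** 2)
```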
When we evaluate each of Pointwise Regression, Fourier head, and Linear head on the toy experiment datasets, their average MSE performances across seeds are all approximately equal to each other, showing no observable advantage in terms of MSE for one model over another.\\nThis is to be expected, given the probabilistic nature of the datasets. We include here our new results:\\n\\n| | Pointwise Regression | Linear Classification | Fourier Classification |\\n|----------|-------------------|---------|----------|\\n| Gaussian | 0.010 \\u00b1 0.001 | 0.011 \\u00b1 0.001 | 0.012 \\u00b1 0.001 |\\n| GMM-2 | 0.121 \\u00b1 0.004 | 0.126 \\u00b1 0.004 | 0.123 \\u00b1 0.005 |\\n| Beta | 0.275 \\u00b1 0.009 | 0.276 \\u00b1 0.008 | 0.275 \\u00b1 0.008 |\\n\\nTo demonstrate why a simple pointwise estimate in a probabilistic setting might be inadequate, we also created a new dataset (the \\\"Beta\\\" dataset in the above table) for which the conditional distributions are symmetric about $0$ by construction. More precisely, the conditional distribution of $z$ given $(x,y)$ is $b * \\\\mathrm{Beta}(100 |x|, 100|y|)$, where $b$ is a random variable that is $+1$ or $-1$ with equal probability. We have used the Beta distribution since it is a non-Gaussian distribution, so it illustrates the flexibility of the Fourier head in learning a wide variety of naturally occurring complicated distributions. 
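The conditional distribution just described can be sampled directly; the following sketch (seed and sample count are arbitrary) checks the symmetry numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_z(x, y, rng):
    """Draw z | (x, y) ~ b * Beta(100|x|, 100|y|), with b = +1 or -1 equiprobable."""
    b = rng.choice([-1.0, 1.0])
    return b * rng.beta(100 * abs(x), 100 * abs(y))

samples = np.array([sample_z(0.5, 0.5, rng) for _ in range(5_000)])
# Symmetric about 0 by construction: mean ~ 0, even though |z| concentrates near 0.5.
assert abs(samples.mean()) < 0.05
assert 0.4 < np.mean(np.abs(samples)) < 0.6
```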
**In fact, on the Beta dataset, our results show that the Pointwise Regression, Linear classification head, and Fourier head models all have identical MSE very close to 0.275, which is also the MSE of a naive baseline model that always predicts 0 regardless of the input.** In other words, the Pointwise Regression model collapses to always predicting values very close to 0 since it regresses to the mean of the underlying distribution, completely missing any information about higher moments of the distribution.\\n\\nOn the other hand, the Fourier head is able to learn a high quality reconstruction of the underlying conditional distribution, showing why one might not choose to rely on a pointwise estimate even in our toy setting. We have included the Pointwise Regression baseline MSE values in the latest draft.\\nWe also include visual samples of the model predictions alongside the true conditional distribution, illustrating this unwanted \\\"regression to the mean\\\" behavior that arises. [(link to graph)](https://drive.google.com/file/d/1ikH_DDJcVCTGslCzKxvSlYL326g8tpK2/view?usp=sharing)\\n\\nGiven these results in the toy setting, we believe that using a classification head that can generate a probabilistic forecast rather than a pointwise regression in the Chronos experiments is desirable since the underlying next-token distributions are complicated and not well-captured by regressing to the mean.\\n\\n(On a technical note: your suggestion prompted us to realize that for evaluating the MSE performance of the categorical distributions predicted by the Fourier and Linear head, one should consider their expected values as the model\\u2019s implied pointwise estimate, rather than just the bin of maximum probability. This is especially applicable since some of our toy distributions are bimodal. As a result, we have updated our MSE metrics in the paper. 
So, thank you for asking this question.)\"}", "{\"title\": \"Rebuttal acknowledgement\", \"comment\": \"Thank you for the detailed rebuttal. These new experiments should make the next version of the paper more convincing. I'll raise my score to support acceptance.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q1: the Fourier head is incremental because LLMs inherently aggregate tokens with higher similarities\\n\\nThe Fourier head\\u2019s job is to ensure that neighboring tokens have similar likelihoods during the next-token prediction step, and this is not guaranteed by LLMs. In fact, in the paper we provide examples of cases where LLMs fail to have neighboring tokens having similar likelihoods.\", \"more_precisely\": \"if you enumerate the $m$ tokens $t_1, t_2, \\u2026, t_m$, then the Fourier head ensures that the learned next-token softmax distribution satisfies the property that the likelihood of $t_i$ is close to the likelihood of $t_{i+1}$. We demonstrate in our experiments that a general attention-based transformer with a linear classification head doesn\\u2019t satisfy this same property. We illustrate this in a figure in the paper:\\n [(link to graph)](https://drive.google.com/file/d/1GNbjx3DyQhea_pRCmhLkrs8NLs6ldeLq/view?usp=drive_link). \\n\\nThis experiment demonstrates that an LLM with a standard classification head outputs a \\u201cjagged\\u201d next-token distribution, whereas the Fourier classification head outputs a \\u201csmooth\\u201d next-token distribution.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q4: does the Fourier head have limited expressiveness because it is bounded?\\n\\n\\nIn practice, we find that this is not an issue. **If needed, a tanh reparameterization can be used to map the domain $[-1,1]$ to the real line. This would let the Fourier head learn unbounded values. 
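The reparameterization mentioned above can be sketched directly: the inverse tanh maps the Fourier head's bounded domain $(-1, 1)$ to the real line, and tanh maps back. This is an illustrative sketch of the idea, not the paper's implementation:

```python
import math

def to_unbounded(y):
    # Inverse tanh maps the bounded domain (-1, 1) to the real line.
    return math.atanh(y)

def to_bounded(x):
    # tanh maps back: any real value lands in (-1, 1), where the head operates.
    return math.tanh(x)

values = [-3.7, 0.0, 2.5]
roundtrip = [to_unbounded(to_bounded(x)) for x in values]  # recovers the originals
```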
We have added an explanation about this in Section 2.4 of the updated manuscript.**\\n\\nHowever, even if we decide to use a bounded domain (as is the case in all the experiments in the manuscript), this still isn\\u2019t an issue, as evidenced by the original Chronos model\\u2019s SOTA time series forecasting accuracy, with datasets which are not a priori bounded within some finite range. Those authors modeled unbounded sequences using a combination of two techniques: 1) normalizing the time series so the individual values are small, and 2) ensuring that the next-token distribution is defined over a sufficiently wide interval.\", \"in_more_detail\": \"in the Chronos paper, the authors normalize the time series so that the mean of the absolute value in the historical context is equal to 1; this ensures that the time series values are small. Then, the architecture defines the range of tokens as equally spaced inside of $[-15, 15]$; this ensures that the model is capable of learning from the examples with significant trend components. With this normalization and tokenization strategy, the Chronos model is SOTA because it turns out that even time series with aggressive trend components are normalized to values that can be forecasted accurately by the model.\"}", "{\"title\": \"Author response\", \"comment\": \"## Q1: why all the emphasis on \\u201csmoothness\\u201d?\\n\\nA primary goal of the paper was to improve performance on the original metrics from the tasks we considered (e.g. accuracy for time series, returns for RL), and we accomplished that using the Fourier head, while also showing that the learned categorical distributions are smoother for the Fourier head than the linear head.\\n\\nAnd you\\u2019re absolutely right that, in the context of this paper, smoothness is not a metric that we should care about on its own. Our experiments show that the Fourier head yields smoother categorical distributions than the Linear classification head. 
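The Chronos-style preprocessing described above (rescale so the context's mean absolute value is 1, then map values to equally spaced bins over $[-15, 15]$) can be sketched as follows; the bin count here is an illustrative placeholder, not Chronos's actual vocabulary size:

```python
def normalize_and_tokenize(context, low=-15.0, high=15.0, n_bins=4096):
    # 1) Scale the series so the mean absolute value of the context is 1.
    # 2) Map each value to one of n_bins equally spaced bins over [low, high].
    scale = sum(abs(v) for v in context) / len(context) or 1.0  # guard: all-zero context
    normalized = [v / scale for v in context]
    width = (high - low) / n_bins
    tokens = [min(n_bins - 1, max(0, int((v - low) / width))) for v in normalized]
    return normalized, tokens

series = [120.0, 135.0, 150.0, 180.0, 240.0]  # strong trend, raw values far outside [-15, 15]
normalized, tokens = normalize_and_tokenize(series)
# After normalization the values are small, so they fall comfortably inside [-15, 15].
```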
But we also find that smoothness of the categorical distribution doesn\\u2019t necessarily lead to better downstream performance.\\n\\n **To provide additional evidence that the Fourier head provides concrete improvements for the original success metrics, we have added two more ablation studies to the paper:**\\n\\n* Ablation study #1: We show the Fourier head is better at learning high-quality next action distributions than the Decision Transformer with a Linear head *across all dataset sizes*. \\n[(link to graph)](https://drive.google.com/file/d/16qJsSBJ9xwT6PiqYqYco6pBkUiHaXX9x/view?usp=sharing)\\n* Ablation study #2: We show the Fourier head is better at learning high-quality next action distributions than the Decision Transformer with a Linear head *across all model sizes*. \\n[(link to graph)](https://drive.google.com/file/d/1-mnyWD4F1Rqgj3-xSVenRp17UWGAMGU1/view?usp=drive_link)\\n\\nIn these ablations, we show that when the Fourier head learns a very smooth density, the generalization is better. Intuitively, this is because if the model needs to learn the likelihood of the $N$\\u2019th token, it can lean on the learned likelihood of its neighboring $N-1$ and $N+1$ tokens, even if it saw very few examples during training where the $N$\\u2019th token was the correct answer.\"}", "{\"title\": \"Following up\", \"comment\": \"We wanted to follow up, and check if the additional ablation studies on scaling model size and dataset size for Chronos satisfied your concerns. Thanks again for your effort reviewing our paper, and for your engagement throughout the rebuttal period!\"}", "{\"comment\": \"In the abstract, the author claims that \\\"we can easily substitute for any linear layer if we want the outputs to have a more continuous structure.\\\" However, the output linear layer of methods like TimeLLM and GPT4TS could also be replaced with a Fourier head. 
I believe that it is not feasible to simply substitute the Fourier head in TimeLLM and GPT4TS because they do not reconstruct the continuous vocabulary. (It is worth noting that LLM-based methods like TimeLLM and GPT4TS are applicable to zero-shot forecasting tasks, as demonstrated in Section 4.8 of GPT4TS. Although the datasets used differ, these LLM-based methods can perform well in out-of-domain tasks.)\\n\\nBesides LLM-based methods, other TS foundation models such as UNITS[1] and MOMENT[2] cannot replace their output layers with the Fourier head either. The authors do not adequately explain how to implement the Fourier head in these scenarios, which I think limits its usability.\\n\\nChronos is also a TS foundation model, and compared to LLM-based methods, TS foundation models require significant computational resources as they need training from scratch or full fine-tuning on large datasets. Their broad applicability remains unproven (whereas LLM-based methods could be quickly trained and validated). If constructing an ordered vocabulary is an essential part of the process, then this disadvantage cannot be ignored.\\n\\n[1] Gao S, Koker T, Queen O, et al. UniTS: Building a unified time series model. NeurIPS 2024\\n\\n[2] Goswami M, Szafer K, Choudhry A, et al. MOMENT: A Family of Open Time-series Foundation Models[C]//Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"I thank the authors for linking to the experiment from the Fourier basis density model paper. I'm less concerned with representing unbounded values with the Fourier head now.\\n\\nI believe it is not unreasonable to train larger models on more data than what is done in the Chronos experiment. For example, the 124M GPT-2 small model is frequently used to validate new methods within an academic compute budget. 

In addition to training at larger scales, one can also train at smaller scales to extrapolate a trend.\"}", "{\"title\": \"Author response\", \"comment\": \"We appreciate the reviewer\\u2019s engagement with us! In response to each point:\\n\\n---\\n\\n### $1/n^2$ regularization\\n\\nWe agree that interpreting the regularization term as the total squared variation is much more natural given its relation to smoothness. We have modified the manuscript to reflect this change (see e.g. updated Section 2.4).\", \"as_for_choosing_whether_to_increase_regularization_strength_or_decrease_the_number_of_fourier_frequencies\": \"tuning the regularization strength gives finer control on penalizing the model for high frequency content to increase smoothness, while still allowing some high frequency information. The number of frequencies is a more drastic handle on this tradeoff, as made explicit in our scaling law. Choosing which to tune needs to be determined for each application \\u2013 for applications in which the underlying distributions are representable in a few number of frequencies $N_0 \\\\ll m/2$, minimizing the number of frequencies until $N_0$ might be preferable in order to increase smoothness. On the other hand, in applications where a high number of frequencies are required to obtain a reasonable reconstruction of the distribution, we would prefer to maximize the number of frequencies while tuning the regularization strength in order to encourage smoothness while not sacrificing modeling capacity.\\n\\n---\\n\\n### Exposition of the Fourier head and the de la Fuente (2024) paper\\n\\nAs you noticed, we only tried to provide a minimal exposition of the Fourier Basis Density Model paper. 
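Extrapolating a trend from smaller scales, as suggested above, is typically done by fitting a power law to error versus scale in log-log space; a minimal sketch on synthetic numbers (the size/loss pairs below are made up for illustration, not real measurements):

```python
import math

# Hypothetical (model_size, eval_loss) pairs roughly following loss ≈ c * size^(-alpha):
points = [(1e6, 4.0), (4e6, 2.83), (16e6, 2.0), (64e6, 1.41)]

# Least-squares line in log-log space: log(loss) = log(c) - alpha * log(size)
xs = [math.log(s) for s, _ in points]
ys = [math.log(l) for _, l in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
alpha = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_c = my + alpha * mx

def predict_loss(size):
    # Extrapolate the fitted trend to a scale not in the training runs.
    return math.exp(log_c - alpha * math.log(size))
```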
Going over it now, we agree with you that our exposition was too minimal, so we\\u2019ve added additional references and explanations--\\n\\n* In the introduction, we\\u2019ve added the sentence: *The Fourier head is constructed using the Fourier Basis Density Model (De la Fuente et al., 2024).*\\n* When we define the Fourier head algorithm, we\\u2019ve added the sentence: *The Fourier head is constructed using the Fourier Basis Density Model from (De la Fuente et al., 2024). For more details on the original method (e.g. justification for how learning the autocorrelation coefficients guarantees that the Fourier series has integral 1, and justification for normalizing the Fourier coefficients by $\\\\mathrm{Re}(c_0)$), we refer the reader to (De la Fuente et al., 2024).*\\n\\n---\\n\\n### Point estimate vs probabilistic forecasting\\n\\nIn the paper, we only consider scenarios where probabilistic modeling is necessary, and where a point estimate is insufficient. We constructed the toy example with this in mind\\u2013the main success metric for the toy example is the KL divergence between the quantized ground truth distribution, and the learned categorical distribution. Similarly, the large-scale examples in the paper (probabilistic agentic decision making, and probabilistic time series forecasting) require learning a probability distribution over the latent space, and sampling from it at test time to obtain the success metrics.\\n\\nIn particular, **probabilistic time series forecasting is a useful tool for decision making because probabilistic forecasts allow us to precisely quantify future uncertainty.** We note that there are tradeoffs when deciding whether to model time series probabilistically versus deterministically: \\n\\n* *Deterministically* modeling time series (e.g. learning to regress, with an MSE loss) is generally simpler, especially during data preprocessing. For example, tokenization is not needed.\\n\\n* *Probabilistically* modeling time series (e.g. 
learning a distribution over the next value of the time series, using cross entropy loss, as in Chronos) is more complicated, as it requires design choices such as tokenization. But the upshot of these methods is that probabilistic forecasts contain all the information from deterministic forecasts, plus more. For example, from a probabilistic time series model, you can choose to sample many possible futures. Computing the median of those futures allows you to compute an accuracy metric like MSE. Additionally, you can extract error bars using the quantiles from the possible futures, which is a clear advantage for practical applications.\\n\\nAnd as you requested, we have added explicit descriptions for our large-scale tasks to make it clear that they involve probabilistic sampling--\\n* For the RL task, we added: *\\u201c At test time, the agent chooses its next action by sampling from the learned next-action distribution.\\u201d*\\n* For the time series task, we added: *\\u201cAt test time, the model chooses the next numerical token by sampling from the next-token distribution.\\u201d*\\n\\nPlease let us know if you have any further questions, or if there is anything that we can clarify!\"}", "{\"title\": \"Author response\", \"comment\": \"## Q4: why is the paper organized this way (specifically, with related works near the end)?\", \"we_ultimately_decided_that_we_wanted_the_paper_to_flow_from\": \"1. Motivating the desire to put a continuous structure over the next-token distribution (section 1, introduction)\\n2. Introducing our proposed method for putting a continuous structure over the next-token distribution (section 2, Fourier head)\\n3. Providing theoretical evidence that the Fourier head is capable of modeling complicated densities, explain the tradeoffs involved (section 3, Theory)\\n4. Demonstrating the Fourier head\\u2019s modeling capability in a low-dimensional intuitive example (section 4, toy example)\\n5. 
Demonstrating the Fourier head\\u2019s modeling capability in a large scale example (section 5, offline RL)\\n6. Demonstrating the Fourier head\\u2019s modeling capability in another large scale example (section 6, zero-shot probabilistic time series forecasting)\\n\\nWe reasoned that putting the related works between the motivation section, and the introduction of the Fourier head method, would break up this flow, so we opted to put it in the last section before the conclusion. Not all of us liked this, so our compromise was to have a \\u201cminimal\\u201d related works explanation embedded in the introduction; this is when we discussed the Decision Transformer, Chronos, and other instances where it could be beneficial to learn next-token distributions with a continuous structure. We\\u2019re very open to alternative ways of framing the story though, did you have any suggestions?\"}" ] }
4hFT4rfG40
Plug-and-Play Controllable Generation for Discrete Masked Models
[ "Wei Guo", "Yuchen Zhu", "Molei Tao", "Yongxin Chen" ]
This article makes discrete masked models for the generative modeling of discrete data controllable. The goal is to generate samples of a discrete random variable that adheres to a posterior distribution, satisfies specific constraints, or optimizes a reward function. This methodological development enables broad applications across downstream tasks such as class-specific image generation and protein design. Existing approaches for controllable generation of masked models typically rely on task-specific fine-tuning or additional modifications, which can be inefficient and resource-intensive. To overcome these limitations, we propose a novel plug-and-play framework based on importance sampling that bypasses the need for training a conditional score. Our framework is agnostic to the choice of control criteria, requires no gradient information, and is well-suited for tasks such as posterior sampling, Bayesian inverse problems, and constrained generation. We demonstrate the effectiveness of our approach through extensive experiments, showcasing its versatility across multiple domains, including protein design.
[ "Discrete Masked Models", "Controllable Generation", "Plug-and-play" ]
https://openreview.net/pdf?id=4hFT4rfG40
https://openreview.net/forum?id=4hFT4rfG40
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gwtRkEWhzT", "ZOy10ZECxC", "SqjCr8cRke", "B6kTKjQ7Jm", "3z0VR9z7IQ" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732762671944, 1730490838597, 1730687056579, 1729395705944, 1730675246750 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12750/Authors" ], [ "ICLR.cc/2025/Conference/Submission12750/Reviewer_kZRj" ], [ "ICLR.cc/2025/Conference/Submission12750/Reviewer_rJsN" ], [ "ICLR.cc/2025/Conference/Submission12750/Reviewer_kUek" ], [ "ICLR.cc/2025/Conference/Submission12750/Reviewer_MPEF" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes a method for generating conditional samples from a masked generative model. Assuming the existence of a reward model, the method draw approximate samples from the unnormalized density r(x)p(x) without requiring the generative model to be retrained.\\n\\nThe method applies the Sampling Importance Resampling (SIR) trick to obtain approximate samples from the target distribution over the course of the generative process.\\n\\nExperimental results demonstrate the concept on a toy problem as well as showcasing impressive results on a protein generation benchmark.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper presents a well-justified method from conditional sampling the presence of a reward function. It details the assumptions it makes and it gives an intuition when/why someone would use this method for conditioning.\\n\\nIn terms of novelty, SIR is not novel, but its application to masked generative models for controllable generations is. I am not aware of other works that use this idea for masked generative models.\\n\\nThe paper is very well written and easy to understand. 
The motivation is clear, and the method is well-explained and detailed. Figure 1 and Algorithm 1 give an excellent overview that makes it easy to implement.\\n\\nThe experimental results on protein generation are extensive. They show convincing results on two benchmarks: solubility and alpha-helix percentage. Furthermore, they include a qualitative assessment of protein in-painting.\", \"weaknesses\": \"A main weakness of the paper is the experimental results. The work is motivated by the versatility of the approach: they claim strong performance across multiple domains. However, experimental results only include protein generation benchmarks. There are no experiments on text, images or audio with the masked models that are discussed in the introduction.\\n\\nRegarding the protein benchmarks, there is no baseline to compare against and there are no ablation experiments.\\n* Baselines: It would be good to see how the method compares to naive fine-tuning approaches (while acknowledging that the proposed method is much lighter computationally).\\n* Ablations: The method does not have many hyperparameters to set, but it would be good to see how the generation quality depends on the number of Monte-Carlo samples used.\", \"questions\": \"Q: One of the motivations is that by setting r to be the p(y|x), one can sample from the Bayesian posterior p(x|y). 
How accurately can this method sample the Bayesian posterior and what kind of Bayesian inference problems can it be applied to?\\n\\nThe reasoning for my score is that I find the claims of effectiveness and versatility lack evidence.\\n* Effectiveness: The experiments have no baselines, so it's difficult to evaluate whether the method is effective or not.\\n* Versatility: The method is only evaluated on a single domain (not counting the toy example).\\n\\nI am willing to increase my score if the authors argue or provide further evidence in support of these two claims in the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles the problem of performing conditional generation of discrete structures via masked generative models. They propose a general-purpose approach for optimizing arbitrary objective functions during the generation process. Subsequently, they provide several simplifications and concrete modelling approaches to make the problem tractable and computationally efficient. Finally, they apply the methodology to a toy problem as well as a protein generation task.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper tackles a broad category of problem; namely plug-and-play conditional generation using discrete masked models without the need for fine-tuning. Additionally, they lay out in which settings their methodology would be advantageous (for example, they indicate that this method is useful when the masked model is much more expensive to evaluate than the reward function).\", \"The authors make a good effort at making the paper reproducible by including source code of the algorithm (as well as detailed algorithm descriptions) in the appendix.\", \"Figure 1 is quite good and greatly facilitates understanding of the proposed methodology. 
In addition, the paper clearly describes the problem which they aim to solve, subsequently provides a concrete approach which makes the problem tractable, and performs some preliminary empirical validation.\"], \"weaknesses\": [\"__Theoretical Concerns__:\", \"Several key aspects of the paper lack a theoretical justification or are not derived in a principled manner. For example, the proposed reward equation $r(x) = \\\\exp\\\\left({-\\\\sum w_i \\\\text{dist}(m_i(x), A_i)^{\\\\alpha_i}}\\\\right)$ is provided with no theoretical grounding or explanation. As best I can tell, the definition of the sampling distribution $q(z) = Z^{-1} r(x)p(x)$ would require $r(x) \\\\geq 0$ in order for $q(x)$ to be a valid distribution. However, this is not mentioned, and many alternative reward functions could be used. I would like to see a more detailed explanation of why this reward function was chosen (either theoretical justification or empirical results).\", \"Similarly, the use of the mean-field approximation and importance sampling present several practical challenges which are not addressed. In the case of importance sampling, the results are heavily dependent on samples obtained from regions of high density, and thus may require many monte-Carlo samples if the proposal distribution is far from the true distribution. Furthermore, the mean-field-approximation assumes that the probabilities of the masked inputs are independent conditioned on the observed values. This is clearly not the case for domains such as images, which exhibit strong local structure. The paper would be much improved with additional analysis of the performance of the proposed methodology when the assumptions are violated and/or on larger-scale problems more representative of real-world use.\", \"The authors mention that the proposed method is beneficial when the complexity of querying the masked model is much higher than evaluating the reward function. 
Unfortunately, this is only true for trivial objective functions. For example, protein structures are typically optimized for a complex objective that is computed by another deep learning model (i.e. predicting biological activity, folding structure, etc.). This calls into question the applicability of the method to wider categories of problems, as most problems of interest will not have a closed form/cheap objective function.\", \"__Experimental Concerns__:\", \"In terms of the experimental validation, the experiments performed do not provide sufficient evidence that the methodology works as intended. First, the experiment using the toy problem uses a uniform masked model with a linear objective function. As expected, the proposed approach performs well given that the problem is explicitly formulated to satisfy the mean-field approximation and importance sampling schemes. No attempt is made to characterize how the method performs as assumptions are violated. Furthermore, the protein experiments are conducted using objectives which are much too simple. GRAVY (Grand Average of hydropathy) is a simple sum of values per individual amino acid. Similarly, the instability index (Guruprasad et al., 1990) consists of summing values from a lookup table for pairs of amino acids. These objectives are simple enough that the assumptions of MFA and importance sampling are not violated, but are not representative in terms of computational costs or complexity of typical protein design tasks. Finally, an experiment is performed to optimize the helical fraction of the peptides. The objective used is not clearly defined in the paper, but validation is performed using ESM3. Consequently, if ESM3 is used for the helical fraction objective, then the objective would not be cheap to evaluate, and the initial assumptions made by the paper are violated. 
Overall, the paper would benefit from more extensive and principled empirical validation in settings more representative of how the methodology would be used in practice.\", \"Another aspect of the experimental results is that both the toy problem and the protein design task consist of relatively simple 1-dimensional discrete structures. I would need to see this methodology applied to more complex discrete structures such as 2D image generation or graph structures (such as per-atom molecule design) in order to validate some of the wider-scope claims made.\", \"In terms of presentation, many of the figures would benefit from more detailed captions to clearly present what is being shown. For example, figure 2 seems to imply that additional monte-carlo samples enable the algorithm to achieve a high degree of success when optimizing the objective, however this is only briefly touched upon in the main text, and not at all addressed in the caption. Additionally, figures 5/6 are quite visually crowded and hard to parse. As these figures occur in the appendix, the authors could take more space to make sure that the results are clearly and unambiguously presented.\", \"__Contribution Concerns__\", \"The main contribution of the paper seems to be the introduction of the sampling distribution $Z^{-1} r(z)p(x)$, and then using MFA and importance sampling to sample from this distribution. This is not a novel methodology and is well known in various Bayesian settings. To accept the paper, there would need to be a more significant theoretical contribution. 
Additionally, there exist pre-existing plug-and-play samplers for continuous diffusion models; this paper extends plug-and-play samplers to discrete masked models, and this does not present a significantly novel framework for conditional generation.\"], \"questions\": [\"I have several questions regarding the content of the paper:\", \"What was the metric used for computing/conditional generation when optimizing the helical fraction?\", \"How is the reward function on page 8 derived? Additionally, why are the intervals for the metrics sometimes closed (i.e. instability with $A = [0, 40]$), and sometimes unbounded (i.e. helix % with $A = [0.8, \\\\infty)$), and in what settings is bounded/unbounded preferable?\", \"Are the helical fractions correct in the protein experiment? In figure 3, the bottom two proteins seem almost identical, yet one has a helix fraction of 0.78, and the other 0.44. This does not seem quite correct.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method to enable controllable generation given any unconditional generative model and a reward function as conditioning signal. This is done by computing importance weights using Monte-Carlo estimates and evaluating the resulting samples using the given reward function. 
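For concreteness, the interval-based reward questioned in the review above, $r(x) = \exp(-\sum_i w_i\,\mathrm{dist}(m_i(x), A_i)^{\alpha_i})$, can be sketched with a distance-to-interval that handles both closed and unbounded targets; the metrics, weights, and exponents below are hypothetical placeholders. Note that this exponential form is strictly positive, so $Z^{-1} r(x) p(x)$ is a valid distribution, addressing the $r(x) \geq 0$ requirement raised under Theoretical Concerns:

```python
import math

def interval_dist(value, low, high):
    # Distance from value to the target interval [low, high]; zero inside.
    # Unbounded targets use low=-inf or high=inf (e.g. helix % in [0.8, inf)).
    return max(low - value, value - high, 0.0)

def reward(x, metrics, targets, weights, alphas):
    # r(x) = exp(-sum_i w_i * dist(m_i(x), A_i) ** alpha_i); always in (0, 1].
    total = sum(w * interval_dist(m(x), lo, hi) ** a
                for m, (lo, hi), w, a in zip(metrics, targets, weights, alphas))
    return math.exp(-total)

# Illustrative stand-in metric (identity) with the instability-style target [0, 40]:
r_inside = reward(25.0, [lambda v: v], [(0.0, 40.0)], [1.0], [1.0])   # dist 0  -> r = 1
r_outside = reward(50.0, [lambda v: v], [(0.0, 40.0)], [1.0], [1.0])  # dist 10 -> r = e**-10
```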
The authors demonstrate the effectiveness of their method using a toy dataset and in the context of conditional protein generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is generally well-written, and clear\", \"The method is relatively simple, and easy to implement\", \"The method only requires an unconditional model, and can be used to controllably generate from any conditional distribution given its corresponding reward function\"], \"weaknesses\": [\"The novelty is relatively low, as importance sampling has been very well studied in prior works. Although to my knowledge, I have not seen it applied it in the context of controllable generation, the experiments do not well demonstrate the effectiveness of the proposed method\", \"Core experiments are on relatively easy (low-dim) distributions, and it is unclear as to how this method scales. How well does the method work for more complex distributions, e.g. for images, longer sequence proteins, etc? Do you need significantly more Monte Carlo samples?\", \"The method quite heavily relies on a good reward function -- which, in general may be difficult to properly specify. How does performance depend on how well the reward function is shaped?\"], \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This model presents a method for controllable generation given a pre-trained (discrete) masking model without further training. Given a reward function, the method iteratively applies masking and remasking along with a mean-field approximation and importance sampling to perform controlled (e.g., conditioned on a class variable) generation in a \\\"plug-and-play\\\" manner. 
The work lays out the grounding theory and connections to (continuous and discrete) diffusion models, motivates the approach, and demonstrates the approach both on toy sequential data and protein generation (inpainting and class conditioned) tasks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Overall the paper is very well written. The motivation of the problem, controllable discrete masked model generation without training, is good, as this implies flexible controllable generation without additional computational overhead of training for each controlled generation task. The theory appears to be sound to me without any errors, arriving at the mean field approximation with importance sampling, which seems to be a reasonable approach and yields decent results on both the toy task and protein generation tasks. The paper does an excellent job presenting the work as close to diffusion models, which makes the theory sections easy to read. The experiments for the most part are well motivated and the results do support the usefulness of the approach to some controllable settings.\", \"weaknesses\": \"No limitations are presented in the paper, and it seems like there may be some worth discussing. One is reward function design, as it's unclear whether some tasks may have reward functions that are difficult to design, or whether success depends heavily on the choice of reward function. The next is that the required number of Monte Carlo samples seems to be quite high: the performance in figure 10 seems to indicate that even at 10k samples the model is still improving. There really should be more of a discussion about this limitation, which I believe is likely due to either the mean field approximation or the remasking schedule, but neither of these limitations / issues is discussed to any significant degree. 
Finally, I wonder why we're only looking at protein sequence generation as the task: why not also look at some natural language applications?\", \"questions\": \"1) Why did you choose to look only at protein sequences and not natural language for controllable generation tasks?\\n2) Is there a relationship between the number of MC samples needed and the mean field approximation or the remasking schedule? For example, if gamma is too high (or too low), does it take more samples to achieve high final reward?\\n3) What are some of the limitations of this model w.r.t. the reward design? What characterizes a controllable generative task for which reward design / success would be easy / hard?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4gaySj8kvX
Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research
[ "Michał Bortkiewicz", "Władysław Pałucki", "Vivek Myers", "Tadeusz Dziarmaga", "Tomasz Arczewski", "Łukasz Kuciński", "Benjamin Eysenbach" ]
Self-supervision has the potential to transform reinforcement learning (RL), paralleling the breakthroughs it has enabled in other areas of machine learning. While self-supervised learning in other domains aims to find patterns in a fixed dataset, self-supervised goal-conditioned reinforcement learning (GCRL) agents discover *new* behaviors by learning from the goals achieved during unstructured interaction with the environment. However, these methods have failed to see similar success, both due to a lack of data from slow environment simulations as well as a lack of stable algorithms. We take a step toward addressing both of these issues by releasing a high-performance codebase and benchmark (`JaxGCRL`) for self-supervised GCRL, enabling researchers to train agents for millions of environment steps in minutes on a single GPU. By utilizing GPU-accelerated replay buffers, environments, and a stable contrastive RL algorithm, we reduce training time by up to $22\times$. Additionally, we assess key design choices in contrastive RL, identifying those that most effectively stabilize and enhance training performance. With this approach, we provide a foundation for future research in self-supervised GCRL, enabling researchers to quickly iterate on new ideas and evaluate them in diverse and challenging environments. Code: [https://anonymous.4open.science/r/JaxGCRL-2316/README.md](https://anonymous.4open.science/r/JaxGCRL-2316/README.md)
[ "Deep Reinforcement Learning", "GPU-accelerated Physics Simulators", "Contrastive Learning", "Unsupervised Reinforcement Learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=4gaySj8kvX
https://openreview.net/forum?id=4gaySj8kvX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkHMtKBkWw", "wF7fPvXpE7", "ulbKPsaPMY", "tMyTCYXKeS", "qJIsl7xtgS", "ouc9GxNfS1", "kZnRWXSgxk", "jxJIq6S4w0", "cLBkICjal8", "bNtmrqFsOp", "YfCBykhZlX", "XUDiv3n7Ab", "TQy6vnSqkv", "Now6Q54lR6", "KV6ZAxzi1r", "DsyfoIWx9k", "DsGVi8phUW", "AwYIp1HXms", "8XAr169EeI", "5ehFh8pyRT", "5KZhUmlMGw" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1730566651269, 1732209400526, 1732123116742, 1731872851836, 1734567047376, 1730405552951, 1732089633239, 1732353355253, 1732030680993, 1731932224105, 1731872551032, 1731873420682, 1732278427088, 1732133736952, 1730496677670, 1731872021862, 1732212165069, 1731870555638, 1730618173949, 1737523580116, 1732342771822 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_j3bj" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_3DLK" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Area_Chair_KGyT" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_3DLK" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_MvLy" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_j3bj" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_j3bj" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_fgPt" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" 
], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_3DLK" ], [ "ICLR.cc/2025/Conference/Submission3502/Authors" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_MvLy" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3502/Reviewer_fgPt" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces JaxGCRL, a codebase that contains environments and a scalable goal-conditioned RL algorithm, all implemented in JAX. This allows researchers to train GC agents much faster than before, making these experiments more accessible. This work also analyses several design decisions of contrastive RL algorithms, enabled by a fast simulator & algorithm implementation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Adding even more environments and algorithms to the JAX ecosystem is great, especially for goal-conditioned RL which is lacking in this space.\", \"The proof-of-concept experiments demonstrate what this library can allow, namely, more thorough investigation of design decisions in goal-conditioned RL\", \"The writing and motivation is quite clear.\"], \"weaknesses\": [\"I can't see any major weaknesses, apart from the limited number of environments, although 8 is pretty respectable.\"], \"questions\": [\"What is your support plan going forward with JaxGCRL, are you planning on adding new environments or algorithms?\", \"It seems like JaxGCRL is very much focused on brax-type environments, is there a part of goal conditioned RL research that potentially focuses rather on discrete action environments that you are leaving out?\", \"What about other non-contrastive GCRL algorithms? Are you planning on adding support for those?\", \"Relatedly, how easy would it be for someone else to implement a new GCRL algorithm to fit within your framework?\", \"And how easy is it to add another goal conditioned environment, based on an existing JAX environment? 
For instance, minigrid or craftax or xland minigrid, etc?\", \"In the maze, for instance, can you dynamically, within the JIT, change the maze layout, or does it have to be statically known at compile time?\", \"Is there an easy way to transfer a JaxGCRL agent to existing environments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks for continuing the discussion. To address your concern regarding the benchmark, we are introducing two challenging environments: Ant Hardest Maze and Humanoid Big Maze, where the success rates are ~30% and ~40%, respectively (see Figures 13 and 14).\\n\\nWe also want to clarify that the agent is optimized for the time near the goal (see [`actor_loss` function](https://anonymous.4open.science/r/JaxGCRL-2316/src/losses.py)). According to the original [contrastive RL paper](https://arxiv.org/pdf/2206.07568):\\n\\n> \\\"Intuitively, this objective corresponds to sampling a goal s_g and then optimizing the policy to go to that goal and stay there.\\\" (bottom of page 3)\\n\\nThus, the agent's failure to stabilise around the goal does mean that the agent is not doing a great job optimizing the objective. For example, we can look at a video of the [humanoid task](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/README.md): the humanoid simply \\\"flings\\\" itself at the goal, whereas the optimal policy would be to run towards the goal and stand there. We know that methods with dense rewards can learn this behavior, but are unaware of goal-conditioned methods (i.e., given just a goal and no dense rewards) that can learn this behavior. We have revised Sec A.6 to clarify this.\\n\\n> On the point of the update-to-data ratio, I do not agree with the authors' understanding of the prior work. 
The argument in those papers (to my knowledge) is not that a high update-to-data ratio is always good, but that prior work had been forced to use a low update to data ratio (because otherwise performance collapses or isn't as good etc.), thereby harming their achievable sample efficiency. Therefore by coming up with a method (presented in those papers) that can achieve high update-to-data ratio, they should expect gains in sample efficiency. Those papers are also specific to off-policy Q-learning algorithms to my knowledge. Off-policy methods should be able to achieve a higher update-to-data ratio. An on-policy algorithm (such as PPO) is clearly going to collapse with a high update-to-data ratio. Given that your method seems (at least to me) to be on-policy, it is not clear to me why it would be expected that such an update-to-data result would hold, and therefore why they are in the paper.\\n\\nThank you for clarifying these points. Based on your feedback, we have revised Section 5.6. Additionally, we have updated the description of our UTD experiment to highlight that it serves as an example of a typically computationally expensive experiment that runs significantly faster using our code. The aim of these experiments is not to make a claim about whether the phenomenon we observe is the same/different from those observed in off-policy methods.\\n\\nDo these further revisions fully address the reviewer's concerns with the paper? If not, we would be happy to run additional experiments and revise the paper further.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"I'd like to thank the authors for their additional experiments and response to my questions. 
I appreciate the running of the experiments with the larger model and thank the authors for their efforts.\n\nI'm not sure that the current experiments do address my concern -- if this is to be a meaningful benchmark, then there should be significant headroom on some of the tasks, so that progress can be measured without resorting to the ceiling effect. As I see it the success rates are all quite high still. I understand that the agents fail to stabilise near the goal, but I am unsure why this is a significant problem? Does this indicate that the policies are very far from optimal? Is it possible to be at the goal for a large proportion of the time? I could see that when, for example, solving a maze, it would take a long time to find the goal, and therefore time near goal is likely to be quite small. To my knowledge all that is 'rewarded' here is achieving the goal right? If you were optimising for time near the goal it would be a valid concern to complain about insufficient time near the goal, but you are not as far as I understand it -- just reaching it in the first place.\n\nAdding a couple of harder environments with worse success rates would alleviate my concerns -- I would then raise my score, but if the current policies can also be shown to be clearly far from optimal, that would also provide sufficient evidence.\n\nOn the point of the update-to-data ratio, I do not agree with the authors' understanding of the prior work. The argument in those papers (to my knowledge) is not that a high update-to-data ratio is *always good*, but that prior work had been *forced* to use a low update-to-data ratio (because otherwise performance collapses or isn't as good etc.), thereby harming their achievable *sample efficiency*. Therefore by coming up with a method (presented in those papers) that *can achieve* a high update-to-data ratio, they should expect gains in sample efficiency. Those papers are also specific to off-policy Q-learning algorithms to my knowledge. 
Off-policy methods should be able to achieve a higher update-to-data ratio. An on-policy algorithm (such as PPO) is clearly going to collapse with a high update-to-data ratio. Given that your method seems (at least to me) to be on-policy, it is not clear to me why it would be expected that such an update-to-data result would hold, and therefore why they are in the paper.\\n\\nI'd also like to note my disagreement with the documentation requirements laid out by reviewer MvLy. While I think that documentation is important, and all the steps suggested by the reviewer would be helpful, I think that the suggestions are clearly above the required level of documentation for previous JAX frameworks accepted at top ML conferences [1, 2]\\n\\n[1] Matthews et al. Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning\\n[2] Rutherford et al. JaxMARL: Multi-Agent RL Environments in JAX\"}", "{\"comment\": \"We thank the reviewer for their time and efforts in working on the review and for their kind words on our work.\\n\\n>The paper mentions it leverages the power of GPU-accelerated simulators, but by comparing against the brax training code under https://github.com/google/brax/tree/main/brax/training, there are some similarities for the training code as well, and it's not mentioned in the paper.\\n\\nWe thank the reviewer for this comment. Our work does build upon Brax, extending the prior work to develop a new benchmark for goal-conditioned RL tasks. In contrast, Brax focuses on single-reward tasks. We accordingly modify Section 5.1 Experimental Setup to indicate that JaxGCRL implementation is based on SAC implementation from Brax.\\n\\n> In Sec 5.3, why is the contrastive objective only evaluated on part of the 8 environments? Similar question in sec 5.6 for examining different UTD ratios.\", \"finite_compute_resources\": \"Figures in these sections are already the result of 800+ training runs. 
In the updated [manuscript](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/Accelerating_Goal_Conditioned_RL_Algorithms_and_Research_rebuttal.pdf), we have included Ant Push in the energy function experiments. In the coming days, we will also add this environment to the contrastive objective and UTD ratio sections.\\n\\n> In Fig 1. Are the num_actors same for the JaxGCRL and CRL?\\n\\nNo. For a fair comparison, we tuned the `num_actors` for both JaxGCRL and CRL and reported the best results for both.\\n\\n> How do you define if the agent is within goal's proximity?\\n\\nWe have revised Section 5.1 (Experimental Setup) to direct readers to Table 1, which provides detailed definitions of proximity for each task.\"}", "{\"metareview\": \"This paper proposes a new benchmark for self-supervised goal-conditioned RL (GCRL) which is optimized for running on a GPU and can thus yield high-throughout experiments. The authors benchmark a contrastive RL method against other popular RL algorithms and demonstrate strong gains.\\n\\nThe main weaknesses raised during the review process were around differentiation with Brax, and the difficulty of the provided environments. The authors seem to have addressed all the reviewer concerns.\\n\\nThe reviewers are unanimous in recommending acceptance for this work, and after going through the paper and discussion, I agree. \\nA minor comment is that there is a broken reference in line 33.\", \"additional_comments_on_reviewer_discussion\": \"There was a good discussion between reviewers and authors, and it seems all reviewers were satisfied with the rebuttal and changes provided by the authors.\"}", "{\"summary\": \"The authors introduce JaxGCRL, a benchmark and framework for evaluating goal-conditioned RL algorithms based on contrastive learning.\\nThey re-implement a number of goal-conditioned tasks from prior literature and evaluate their implementation on it. 
\\n\\nThey then evaluate the effect of different losses, more samples, and larger networks on their implementation. They demonstrate that their Jax-based implementation is significantly faster than previous libraries, accelerating future research.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper has several significant strengths.\", \"Although not novel, JAX implementations are to be commended. They improve research iteration speed significantly.\", \"The paper is very well written. The authors communicate their results clearly and unambiguously.\", \"The authors evaluate using the inter-quartile mean and bootstrapped confidence intervals. This is more sound than using learning curves etc.\", \"The authors provide a number of ablations and experiments that explain the performance of their implementation.\"], \"weaknesses\": [\"However, I have a number of issues with this paper, which is why I recommend rejection.\", \"The authors claim that their setting is challenging, but do not effectively demonstrate that this is the case. The authors demonstrate that by using a bigger network (1024 layer width and depth of 4) and layer norm, the performance significantly improves. They also run experiments where they train for significantly more interactions. However, as best I can tell (it is not always clear which network is used in which experiment), the authors never run their biggest, highest performing network for 300M steps on all the tasks. The authors do not pitch their work as focussing on sample efficiency, and therefore I am not sure why their evaluation framework should be compelling if the tasks can be solved by scaling up networks and using more samples. If the authors can provide a demonstration that this does not satisfactorily solve their benchmark, **I will raise my score**. 
However, without this demonstration, I do not believe that the experimental insights and JAX implementation are enough to warrant acceptance.\", \"I am confused about the experiments concerning the update-to-data ratio (UTD). Given a fixed step budget, doing fewer or more updates is a pure trade-off. You can do fewer, less noisy updates, or do more, noisier updates. This occurs all over RL, for example when choosing the number of parallel environments to use in PPO. I am not sure why a high or low number of updates would be beneficial, or why this quantity would be interesting to examine.\"], \"i_also_have_a_number_of_more_minor_points\": [\"The authors claim that they cannot directly compare brax and mujoco because brax uses a different physics engine, but the MuJoCo physics engine has been available in brax for a while now [1] -- what exactly is the issue here?\", \"The discussion of related work on jax-based environments is missing some work. Gymnax [2] and PureJaxRL [3] were both important landmarks in the use of and benefits of JAX in RL and warrant inclusion.\", \"The authors should probably rephrase line 117, which begins with \\\"In addition to reducing CPU-GPU data-transfer overhead...\\\". While implementing an environment in JAX *does* do this, there are also significant other factors such as JIT compilation and the resulting operator fusion and the ability to use more vectorised environments than a typical CPU + pytorch approach that lead to the significant speedups.\", \"A number of the papers listed in the appendix have incorrect citations or are missing authors.\", \"Line 1032 in the appendix contains a typo (lenght -> length)\"], \"questions\": \"See weaknesses.\\n\\n[1] Brax documentation. https://github.com/google/brax?tab=readme-ov-file#one-api-four-pipelines\\n\\n[2] Gymnax: A JAX-based Reinforcement Learning Library. Robert Lange. https://github.com/RobertTLange/gymnax\\n\\n[3] Lu, Chris, et al. 
\\\"Discovered policy optimisation.\\\" Advances in Neural Information Processing Systems 35 (2022): 16455-16468. https://github.com/luchris429/purejaxrl\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for adding the baselines, updating the README, and beginning work on the documentation. This is a good start, but I think there is still quite a bit more work to be done before this library is polished enough for public release. As a show of good faith, I am prepared to update my score if the reviewers promise to complete the following by the camera ready deadline:\", \"Host the documentation on `readthedocs` or alternative\", \"Write proper documentation for each loss function in losses.py (link to the original paper, what it is doing, how it works)\", \"Provide docstrings for all user-facing functions\", \"Annotate all configuration variables with their meaning (for example, those in `training.py`)\", \"Make sure the argparser spits out these variables and associated annotations when `--help` is passed to `training.py`\", \"Add a tutorial on implementing a new loss and using a custom model architecture\", \"Add a unit test that checks for breakages in all environments and wrappers that inherit from `brax` (possibly by initializing each and running for a few timesteps)\"]}", "{\"comment\": \"Great, thank you for elaborating. I will keep my score of 8, as I believe this paper should be accepted.\"}", "{\"comment\": \"Taking a trained agent in JaxGCRL agent (say for MJX ant maze) and deploying it in a non-jax, gymnasium-based ant-maze environment is straightforward to implement. Running the agent in the Gymnasium-based environment would only require loading the models trained with JaxGCRL. 
In fact, in the context of single-task RL, prior work has already done effectively the same thing; Humanoid Bench [1] first trains a one-hand reaching policy using massively parallelized PPO with MuJoCo MJX, and later adapts that policy to more challenging tasks (simulated in classical MuJoCo) with a higher number of potential collisions.\\n\\n[1] [https://humanoid-bench.github.io/](https://humanoid-bench.github.io/)\"}", "{\"comment\": \"Thank you for your response! I do appreciate the addition of the new environments, new algorithms, and the new docs.\\n\\n\\n> We would like to further inquire about the specific environments the reviewer refers to. \\n\\n\\nWhat I had in mind here was, for instance, doing fast training using JaxGCRL, and then using that trained agent in current, non-jaxified environments (e.g. what researchers were using before). I guess the response about MJX vs Mujoco does answer this, in that transferring agents directly may not result in amazing performance due to the slight dynamics differences between the two physics engines. Disregarding that, however, how involved would the coding/porting need to be if I wanted to export a JaxGCRL agent (say for MJX ant maze) and deploy it on a non-jax, gymnasium-based ant-maze environment?\"}", "{\"comment\": \"We thank the reviewer for their time and efforts in working on the review and for their kind words on our work.\\n\\n> I can't see any major weaknesses, apart from the limited number of environments, although 8 is pretty respectable.\\n\\nSince the initial submission, we have added 5 new tasks that involve robotic manipulation and one harder ant maze environment. \\n\\n> What is your support plan going forward with JaxGCRL, are you planning on adding new environments or algorithms?\\n\\nOur support plan assumes active maintenance of JaxGCRL in the following months, focusing on gaining the necessary visibility and contributors. 
We realise that JaxGCRL will only provide value to the community if it becomes the off-the-shelf solution for custom GCRL experiments. To improve usability, we enhanced the repository by adding detailed documentation, multiple examples, and an updated `README.md` file.\n\n> It seems like JaxGCRL is very much focused on brax-type environments, is there a part of goal conditioned RL research that potentially focuses rather on discrete action environments that you are leaving out?\n\nYes, there's a good bit of prior work on GCRL in settings with discrete actions [1-4]. However, our focus is on the likewise well-studied problem of GCRL in settings with continuous actions [5,6]. \n\n\n> What about other non-contrastive GCRL algorithms? Are you planning on adding support for those?\nRelatedly, how easy would it be for someone else to implement a new GCRL algorithm to fit within your framework?\nAnd how easy is it to add another goal conditioned environment, based on an existing JAX environment? For instance, minigrid or craftax or xland minigrid, etc?\n\nWe have added additional non-contrastive GCRL algorithms, including PPO and TD3 ([Figure 3](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/Accelerating_Goal_Conditioned_RL_Algorithms_and_Research_rebuttal.pdf)). Adding a new algorithm is relatively easy because changes should concern mostly `networks.py` and `losses.py` files. Adding a novel, already working, MJX-based environment to JaxGCRL is also straightforward. 
We modified `README.md` to describe this process in greater detail.\\n\\n> In the maze, for instance, can you dynamically, within the JIT, change the maze layout, or does it have to be statically known at compile time?\\n\\nCurrently, dynamically changing the maze layout is not supported, and the layout must be provided at the beginning of the experiment.\\n\\n> Is there an easy way to transfer a JaxGCRL agent to existing environments?\\n\\nWe would like to further inquire about the specific environments the reviewer refers to. It is easy to transfer the JaxGCRL agent to other MJX-based environments. Environments created for MuJoCo can be, in most cases, adapted for MJX (as explained in the Feature Parity document [7]). In some situations, migration can be challenging since it requires rewriting the logic of the environment in JAX and thus requires some technical knowledge.\\n\\n[1] Chevalier-Boisvert, M., Dai, B., Towers, M., Lazcano, R. de, Willems, L., Lahlou, S., Pal, S., Castro, P. S., & Terry, J. (2023, June 24). Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. http://arxiv.org/abs/2306.13831\\n\\n[2] Hoang, C., Sohn, S., Choi, J., Carvalho, W., & Lee, H. (2021, November 18). Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning. http://arxiv.org/abs/2111.09858\\n\\n[3] Nikulin, A., Kurenkov, V., Zisman, I., Agarkov, A., Sinii, V., & Kolesnikov, S. (2024, February 6). XLand-MiniGrid: Scalable Meta-Reinforcement Learning Environments in JAX. http://arxiv.org/abs/2312.12044\\n\\n[4] Liu, M., Zhu, M., & Zhang, W. (2022, September 2). Goal-Conditioned Reinforcement Learning: Problems and Solutions. http://arxiv.org/abs/2201.08299\\n\\n[5] Chane-Sane, E., Schmid, C., & Laptev, I. (2021, July 1). Goal-Conditioned Reinforcement Learning with Imagined Subgoals. http://arxiv.org/abs/2107.00541\\n\\n[6] Nasiriany, S., Pong, V. H., Lin, S., & Levine, S. (2019, November 19). 
Planning with Goal-Conditioned Policies. http://arxiv.org/abs/1911.08453\n\n[7] https://mujoco.readthedocs.io/en/3.0.1/mjx.html#feature-parity\"}", "{\"comment\": \"We thank the reviewer for their time and feedback on the work. It seems like the reviewer's main concern is that the tasks in the benchmark may not be difficult enough. To test this hypothesis, we trained our biggest network (1024 layer width and depth of 4 and layer norm) for 300M steps on all the tasks. This scaled-up model achieves over a 50% [success rate](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/success_rate.png) on the proposed tasks. However, it [struggles to stabilise at the goal](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/time_near_goal.png); for instance, on five tasks, the best-performing agent spends less than 50% of an episode at the goal. *Does this address the reviewer's concern about the tasks not being difficult enough?* If not, we are happy to run additional experiments or add additional environments. One example of such an environment is the Ant Hard Maze, where the scaled-up CRL model fails to achieve a success rate of 50%.\n\n> I am confused about the experiments concerning the update-to-data ratio (UTD). Given a fixed step budget, doing fewer or more updates is a pure trade-off. You can do fewer, less noisy updates, or do more, noisier updates. This occurs all over RL, for example when choosing the number of parallel environments to use in PPO. I am not sure why a high or low number of updates would be beneficial, or this quantity would be interesting to examine.\n\nWe include this ablation experiment because prior work [1-3] has found that UTD can be important for certain RL algorithms. 
Unlike prior work, we found that CRL with a low UTD works **better** on several tasks.\\n\\n> The authors claim that they cannot directly compare brax and mujoco because brax uses a different physics engine, but the MuJoCo physics engine has been available in brax for a while now [1] -- what exactly is the issue here? \\n\\nThere are some unfortunate naming conventions, where \\\"Brax\\\" refers to a physics _library_ that now includes several different physics _engines_: Positional, Spring, Generalised, and MJX for a while. The \\\"original\\\" Mujoco ant environments used the MuJoCo physics engine. In contrast, the Brax `Ant` environments use the Spring physics engine by default, so performance on one benchmark isn't the same as performance on the other. \\nWe would also like to note that not all Brax environments currently support MJX [5]. We encountered issues running Ant or Pusher with the MJX backend out-of-the-box. Additional tuning of the MJX physics engine may be required in these environments.\\n\\n> - The discussion of related work on jax-based environments is missing some work. Gymnax [2] and PureJaxRL [3], both were important landmarks in the use of and benefits of JAX in RL and warrant inclusion.\\n> - The authors should probably rephrase line 117, which begins with \\\"In addition to reducing CPU-GPU data-transfer overhead...\\\". > - > - While implementing an environment in JAX does do this, there are also significant other factors such as JIT compilation and the resulting operator fusion and the ability to use more vectorised environments than a typical CPU + pytorch approach that lead to the significant speedups. \\n> - A number of the papers listed in the appendix have incorrect citations or are missing authors.\\n> - Line 1032 in the appendix contains a typo (lenght -> length)\\n\\n\\nWe appreciate the reviewer for pointing out these issues. 
We have fixed all of them in the [new manuscript version](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/Accelerating_Goal_Conditioned_RL_Algorithms_and_Research_rebuttal.pdf).\\n\\n\\n[1] D\\u2019Oro, P., Schwarzer, M., Nikishin, E., Bacon, P.-L., Bellemare, M. G., & Courville, A. (2022, September 29). Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier. The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=OpC-9aBBVJe\\n\\n[2] Schwarzer, M., Obando-Ceron, J., Courville, A., Bellemare, M., Agarwal, R., & Castro, P. S. (2023, June 9). Bigger, Better, Faster: Human-level Atari with human-level efficiency. http://arxiv.org/abs/2305.19452\\n\\n[3] Nauman, M., Ostaszewski, M., Jankowski, K., Mi\\u0142o\\u015b, P., & Cygan, M. (2024, May 25). Bigger, Regularized, Optimistic: Scaling for compute and sample-efficient continuous control. http://arxiv.org/abs/2405.16158\\n\\n[4] Spring Backend https://github.com/google/brax?tab=readme-ov-file#one-api-four-pipelines:~:text=and%20collision%20constraints.-,Spring,-provides%20fast%20and\\n\\n[5] MJX support https://github.com/google/brax/discussions/409#:~:text=We%20will%20work%20to%20port%20MJX%20into%20Brax%20as%20another%20physics%20pipeline\"}", "{\"comment\": \"Dear Reviewer MvLy\\n\\nWith the rebuttal deadline approaching, we kindly ask if you could strengthen support for the paper, considering the changes we have already implemented and our commitment to addressing all your suggested improvements. Please let us know if you have any further questions, and we will respond promptly.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"comment\": \"We thank the reviewer for this actionable feedback. 
We have already completed the following improvements:\n\n> - Write proper documentation for each loss function in losses.py (link to the original paper, what it is doing, how it works).\n> - Annotate all configuration variables with their meaning (for example, those in training.py)\n> - Make sure the argparser spits out these variables and associated annotations when --help is passed to training.py\n\nThe changes are in the updated [anonymous code](https://anonymous.4open.science/r/JaxGCRL-2316/README.md).\n\nWe are committed to implementing all the remaining proposed changes before the camera-ready deadline.\"}", "{\"summary\": \"The paper provides a JIT-compiled codebase with vectorized environments that can speed up training and the iteration of new ideas on goal-conditioned reinforcement learning problems.\nIn addition, it provides a stable baseline algorithm for goal-conditioned reinforcement learning problems that's benchmarked in the 8 diverse continuous environments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The JaxGCRL codebase is significantly faster than the original codebase.\", \"The proposed baseline consistently outperforms the counterpart in all 8 environments, demonstrating its stability from simple to complex environments.\", \"The performance of different design choices is extensively tested and the result metric is easy to interpret.\"], \"weaknesses\": \"The paper mentions it leverages the power of GPU-accelerated simulators, but by comparing against the brax training code under https://github.com/google/brax/tree/main/brax/training, there are some similarities for the training code as well, and it's not mentioned in the paper.\", \"questions\": [\"In Sec 5.3, why is the contrastive objective only evaluated on part of the 8 environments? Similar question in sec 5.6 for examining different UTD ratios.\", \"In Fig 1. 
Are the num_actors same for the JaxGCRL and CRL?\", \"How do you define if the agent is within goal's proximity?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their time and for reviewing our manuscript. It seems like the reviewer's main concern is facilitating the repository's adoption and ensuring that it is maintained. As one step towards addressing this concern, we have improved the repository's usability by adding new documentation, providing several examples, and enhancing the README.md file. It is worth mentioning that, internally, we have seen several new collaborators adopt our codebase for use in their research. This includes researchers from 5 institutions and one company. Helping these new users adopt the code has also improved the benchmark, resulting in 10+ pull requests and 6 new environments. **Together with the discussion below, does this address the reviewer's concerns about the benchmark?** If not, we are happy to revise the paper further and add additional features to the benchmark. We look forward to continuing the discussion!\\n\\n> Long term plan for maintaining the project\\n\\nWe appreciate the reviewer\\u2019s feedback. We recognise that only a few reinforcement learning (RL) packages stand the test of time, but we are committed to making JaxGCRL one of them. Our package complements the excellent CleanRL library, aiming to raise research standards specifically in the goal-conditioned RL (GCRL) field. Currently, there is no single GCRL benchmark that researchers and practitioners can readily use to test new ideas. Training in previous studies is typically slow [1], the number of tasks is often limited [2], and the tasks tend to be too easy [3].\\n\\nAs evidence of the impact of the work, collaborators from both industry and academia are eager to use the codebase for their own research efforts. 
Since the initial submission, we have added several new features (e.g., new environments, refactoring, etc) to help onboard new users. We hope that this also helps underscore our commitment to long-term project maintenance. \\n\\n> There is no documentation, it is unclear:\\n> - Which approaches are implemented\\n> - How to use these approaches\\n> - How to add new models\\n> - The structure of the codebase\\n\\nWe thank the reviewer for this suggestion. We have added detailed information about the codebase structure, implemented approaches and environments to the README.md file ([see anonymous code](https://anonymous.4open.science/r/JaxGCRL-2316/README.md)). Additionally, we started [MkDocs documentation](https://anonymous.4open.science/r/JaxGCRL-2316/docs/index.md), which we will host on the project website to make it easier to get started with JaxGCRL.\\n\\n> - There are no unit tests, so the correctness of the code (and the ability to maintain the code as time goes on) is unclear\\n> - The train script is solely for the authors, relying on a pre-existing conda environment\\n> - There are no tutorials beyond a single bash command that runs a parameter sweep\\n> - The library relies on wandb, and does not seem to run without a wandb account\\n\\nWe started implementing these valuable suggestions, starting with different experiment examples and a flag for *optional* logging to wandb.\\n\\n> - As far as I understand, the authors only implement 3 algorithms, and I would like to see more than three baselines so that we can do proper comparisons\\n\\nWe've added 3 baselines: TD3, TD3+HER and PPO to the benchmark figure ([Figure 3](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/Accelerating_Goal_Conditioned_RL_Algorithms_and_Research_rebuttal.pdf)).\\n\\n\\n[1] https://github.com/google-research/google-research/tree/master/contrastive_rl\\n\\n[2] https://github.com/dingyiming0427/goalgail \\n\\n[3] https://github.com/martius-lab/HiTS\"}", "{\"title\": \"Thank you 
for the response\", \"comment\": \"I'd like to thank the authors for engaging with my concerns. The added environments are clearly difficult even for the largest network presented and I will therefore raise my score as promised.\\n\\nThank you for clarifying the point around stabilisation around the goal and answering the rest of my questions.\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank the reviewers for their time reviewing our manuscript. The insights provided by the reviewers have allowed us to increase further the quality and readability of our manuscript and JaxGCRL. So far, we have made the changes listed below in response to reviewers' questions and suggestions.\", \"**Improving the JaxGCRL user experience with proper documentation.** We would like to thank reviewer MvLy for their helpful suggestions on the essential components of a modern RL library. In the coming days, we will share updates on the changes to JaxGCRL to enhance its usability, starting with a documentation (MkDocs Material) update today:\", \"We added a list of implemented environments.\", \"We added a concise example of using different environments and methods.\", \"We described the structure of the codebase.\", \"We added information on how to run experiments without Weights&Biases by setting a run flag\"], \"reviewers_can_see_these_new_changes_on_the_anonymised_repo\": [\"https://anonymous.4open.science/r/JaxGCRL-2316/README.md.\", \"**Readability of our manuscript** - following comments from all the reviewers, we have made several changes to our manuscript, which we believe further improve its clarity:\", \"In Section 2.2 (Accelerating Deep Reinforcement Learning), we clarified JAX's importance for GPU-accelerated environments.\", \"In Section 5.1 (Experimental Setup), we added the definition of the goal proximity criterion and information about Brax implementation dependency.\", \"In Section 5.2 (JaxGCRL Benchmark Results), we added new baselines: TD3, TD3+HER and 
PPO.\", \"In Appendix (A.6), we have included additional results with scaled-up CRL in a data-rich setting (see below: Extended results on 300M steps for big architecture).\", \"We also corrected all the typos and missing citations pointed out by reviewers.\"], \"these_new_changes_are_visible_in_the_updated_manuscript\": \"https://anonymous.4open.science/r/paperJaxGCRL-EFEB/Accelerating_Goal_Conditioned_RL_Algorithms_and_Research_rebuttal.pdf. We have highlighted the changes with orange colour, both text and modified figures.\\n\\n**Extended results on 300M steps for big architecture** - Reviewer 3DLK pointed out that we haven\\u2019t evaluated the highest performing architecture (network with 1024 layer width and depth of 4 and layer norm) on every environment while training for 300M steps as in Section 5.5.\\n\\nWe find that these changes can increase the fraction of trials where the agent reaches the goal at least once, but they do not enable the agent to stabilise around the goal (e.g., on 5 tasks, the best agent spends less than 50% of an episode at the goal). When visualizing the rollouts, we observe that the Humanoid agent falls immediately after reaching the goal state, and the Ant Soccer agent struggles to recover when it pushes the ball too far away. The key takeaway is that there is still room for substantial improvement (e.g., can any RL method spend 80% of time steps at goal on Humanoid task), underscoring one dimension on which these tasks are challenging.\", \"links_to_figures_from_this_experiment\": \"- [Time near goal](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/time_near_goal.png)\\n- [Success rate](https://anonymous.4open.science/r/paperJaxGCRL-EFEB/success_rate.png)\\n\\nWe have also added a new task, Ant Hard Maze. No method reaches the goal in more than 50% of episodes on this new task. 
We also want to highlight that modifying the proposed environments to make them more difficult is straightforward and requires changing only a single method (`random_target`), which defines the distribution from which goals are sampled.\n\nWe believe that these changes increase the quality of JaxGCRL and our manuscript, and again, we are grateful to the reviewers for their suggestions.\"}", "{\"summary\": \"The authors propose a new library for goal conditioned reinforcement learning (GCRL). Unlike prior work, their method runs end to end on the GPU, making training faster. They implement 8 environments in JAX, as well as a few algorithms and various objectives. Then, they evaluate existing methods across a number of axes, investigating replay ratios, model sizes, and energy functions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The library is well-motivated, as speeding up RL leads to better experimentation\", \"The authors implement many environments and energy functions\", \"The scale, energy function, update-to-data ratio experiments are interesting and useful for future work on GCRL\"], \"weaknesses\": [\"The library appears like a \\\"one-and-done\\\" sort of thing that will not be maintained after publication. In RL there is already a large graveyard of abandoned RL projects that no longer run and provide no value to the community. Given this fact, I can only review the current state of the library. In its current state, I think the library needs a bit more work before publication.
Please see https://docs.cleanrl.dev for an example of what I think a modern RL library should look like.\", \"There is no documentation, it is unclear:\", \"Which approaches are implemented\", \"How to use these approaches\", \"How to add new models\", \"The structure of the codebase\", \"There are no unit tests, so the correctness of the code (and the ability to maintain the code as time goes on) is unclear\", \"The train script is solely for the authors, relying on a pre-existing conda environment\", \"There are no tutorials beyond a single bash command that runs a parameter sweep\", \"The library relies on wandb, and does not seem to run without a wandb account\", \"As far as I understand, the authors only implement 3 algorithms, and I would like to see more than three baselines so that we can do proper comparisons\"], \"questions\": [\"Figure 4 typo: \\\"though DPO policies remain at the goal for a shorter\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Thank author for addressing my comments and adding more experiments.\\n\\nOverall the paper is in good shape, I would recommend acceptance.\", \"nit\": \"ill-formatted citation at line 33\"}" ] }
4g0PUEAHg0
Transformers Learn Bayesian Networks Autoregressively In-Context
[ "Yuan Cao", "Yihan He", "Dennis Wu", "Hong-Yu Chen", "Jianqing Fan", "Han Liu" ]
Transformers have achieved tremendous successes in various fields, notably excelling in tasks involving sequential data like natural language processing. Despite their achievements, there is limited understanding of the theoretical capabilities of transformers. In this paper, we theoretically investigate the capability of transformers to autoregressively learn Bayesian networks in-context. Specifically, we consider a setting where a set of independent samples generated from a Bayesian network are observed and form a context. We show that, there exists a simple transformer model that can (i) estimate the conditional probabilities of the Bayesian network according to the context, and (ii) autoregressively generate a new sample according to the Bayesian network with estimated conditional probabilities. We further demonstrate in extensive experiments that such a transformer does not only exist in theory, but can also be effectively obtained through training. Our analysis showcases the potential of transformers to effectively learn complicated probabilistic models, and contributes to a better understanding of the success of large language models.
[ "transformer", "Bayesian network", "in-context learning" ]
https://openreview.net/pdf?id=4g0PUEAHg0
https://openreview.net/forum?id=4g0PUEAHg0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yt119EvhMR", "sHgAxQd379", "Zq7aWz46lF", "WtFPIuktUV", "KMAqJytEL0", "8XbgP2qVE3", "6gDTvHMze6", "5W4Ti3aG15", "1lLC0SpmgM", "1PiHLhYGkq" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733313677040, 1730696743288, 1733312978018, 1730597391362, 1733311119943, 1733310389060, 1733320822584, 1733311666394, 1730661081013, 1730586671951 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Reviewer_WQ4Q" ], [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Reviewer_j98B" ], [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Authors" ], [ "ICLR.cc/2025/Conference/Submission13791/Reviewer_6Gv6" ], [ "ICLR.cc/2025/Conference/Submission13791/Reviewer_WKSB" ] ], "structured_content_str": [ "{\"comment\": \"Thank you very much for your helpful comments. We address your questions as follows.\\n\\n\\n> *Q1.* I find the notation quite confusing and the way the paper is organised makes it a bit hard to follow.\\n\\n*A1.* Thanks for pointing it out. We believe that your confusions are caused by several typos and unexplained notations. We will fix them in the revision.\\n\\n>*Q2.* As far as I understand, the paper focuses on only three different Bayesian networks with a fixed structure (shown in Figure. 1).\\n\\n*A2.* We believe this is a misunderstanding. Our theory holds for arbitrary architectures of Bayesian networks. Our experiments consider example structures illustrated in Figure. 1 as we believe they are representative graph structures. 
We will add real data experiments in the revision.\n\n>*Q3.* On a related note, why are only binary variables considered? It would be interesting to extend the analysis to variables taking values from a vocabulary of a certain size.\n\n*A3.* We believe this is also a misunderstanding. Our theory applies to any discrete random variables and our theorem results are established for the case where each random variable can take $d$ different values. We have also added experiments in our revised paper on multi-category variables (Figures 13 and 14). \n\n>*Q4.* In section 5.1 (model paragraph), the dimensions of p and p_q change compared to Eq. 3.1 where they were defined. Could the authors please clarify?\n\n*A4.* Thanks for pointing it out. We will thoroughly revise the paper and ensure consistent notations.\n\n>*Q5.* From the experiments in section 5.3 a one layer transformer seems to be enough. This result contrasts with the theoretical construction which in principle would require a 2-layer model. Could the authors better elaborate on this point?\n\n*A5.* We believe there is no contradiction between our theory and experiments. Our theory shows that there exists a simple transformer model with good performance, providing practical guidance that a well-trained transformer should at least perform similarly to the one we theoretically construct. Our theory does not deny the possibility that there exist other transformer models that can perform equally well on the same task.\n\n>*Q6.* Typos.\n\n*A6.* Thank you for pointing out the typos. We will fix them in the revision.\"}", "{\"summary\": \"This paper theoretically constructs a simple transformer model that learns to sample from a Bayesian Network (BN) from in-context samples. A BN is an ordered set of variables that has a causal ordering among its variables and satisfies various conditional probabilities.
The paper shows that for a BN of bounded maximum indegree, a simple 2-layer transformer can approximate the conditional probabilities and generate new samples. The proof is simple and basically constructs positional embeddings that mimic the parent structure of the BN and then applies MLE. Experiments are conducted to validate the theory on simulated BNs and also probe the number of layers needed. The target audience is people interested in theoretical machine learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"There has been a lot of growing interest in theoretically studying whether transformers can learn certain structures [1, 2, 3]. The problem this work studies, whether transformers learn Bayesian networks, is very interesting and relevant to the ICLR community.\", \"The general problem of learning a BN is very tricky (even with transformers) and the work simplifies it nicely using a curriculum approach so only a few variables are introduced at each stage. However, while the idea is novel, this does limit the usefulness of this algorithm (see weaknesses below).\", \"#### References:\", \"[1] Transformers Learn Shortcuts to Automata\", \"[2] Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers.\", \"[3] (Un)interpretability of Transformers: a case study with bounded Dyck grammars\"], \"weaknesses\": [\"While the result is nice to have, it's unclear how the main theorem of this work compares to results from existing works on universal approximation capabilities of transformers.\", \"Moreover, it's also unclear whether gradient descent or standard approximation methods used to learn such models will extract some sort of similar structure.
The authors state this in their conclusion; however, this is a relevant weakness of this work and limits its utility.\", \"The curriculum setup sounds interesting; however, it seems to require a priori knowledge of the causal order, and this may not be available in practice.\", \"While experiments on simulated data validate the theory, it would also be nice to have some validation on real-life data (even on few variables).\"], \"questions\": [\"Some questions were raised above.\", \"In the definition of a BN, the causal order seems to respect the index order. Does the main theorem hold when the ordering is not known, i.e. the variables are permuted uniformly in the samples?\", \"#### Typos:\", \"L152: will-known -> well-known\", \"L198: paramter -> parameter\", \"L294: missing citation for visualization\", \"L667: nad -> and\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your helpful comments. We give our detailed responses to your questions below.\n\n\n>*Q1.* The paper does not provide evidence on whether the trained transformer implements the algorithm proposed in Theorem 4.1 and Section 6. Other previous works on similar topics utilize attention pattern analysis and causal studies through ablations.\n\n*A1.* Thank you for your question and suggestion. Please note that as a study focusing on the expressive power of transformers, the goal of our paper is to demonstrate that there exists a simple transformer capable of handling our task of interest.
To our knowledge, most results of this type do not necessarily ensure that the actual model obtained through training is exactly the same as the one constructed in theory.\n\nOur theory shows that there exists a simple transformer model with good performance, providing practical guidance that a well-trained transformer should at least perform similarly to the one we theoretically construct. Therefore, even if there is no exact match between the trained model and the theoretical construction, it does not diminish the practical value of our theory.\n\n\n\n>*Q2.* The paper lacks explanations of terms like \u201cnaive bayes\u201d and \u201cbayesian inference\u201d and does not clarify how the accuracy of these algorithms is calculated in the accuracy plots in Figures 2, 4, and 6.\n\n*A2.* Please refer to *A4* in our response to all reviewers.\n\n\n>*Q3.* The paper does not address the robustness of the results under more realistic settings, such as with positional embeddings.\n\n*A3.* Please note that, as a paper focusing on the expressive power of transformers, extending our result to settings with practical positional embeddings is trivial \u2013 we can easily modify a bias term slightly to subtract the unused positional embeddings. Positional embeddings do not significantly affect the experiment results either.\"}", "{\"summary\": \"The main goal of this paper is to demonstrate how transformers can learn Bayesian networks in context. The paper provides a theoretical proof showing that transformers are capable of modeling Bayesian networks in context. Additionally, the paper provides evidence that transformers trained on such tasks can make Bayes-optimal predictions for new samples in an autoregressive manner.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a theoretical construction showing that transformers are capable of capturing Bayesian networks in context.\n2.
The paper presents a well-defined experimental framework to explore how transformers learn Bayesian networks in context, which could inspire further research.\\n3. The paper compares prediction accuracy by varying the variables and types of Bayesian networks, providing a detailed description of qualitative differences among various instances.\", \"weaknesses\": \"1. The paper does not provide evidence on whether the trained transformer implements the algorithm proposed in Theorem 4.1 and Section 6. Other previous works on similar topics utilizes attention pattern analysis and causal studies through ablations.\\n2. The paper lacks explanations of terms like \\u201cnaive bayes\\u201d and \\u201cbayesian inference\\u201d and does not clarify how the accuracy of these algorithms is calculated in the accuracy plots in Figures 2, 4, and 6.\\n3. The paper does not address the robustness of the results under more realistic settings, such as with positional embeddings.\", \"questions\": \"1. Could you specify \\u201cnaive bayes\\u201d and \\u201cbayesian inference\\u201d in main text?\\n2. Could you provide whether the trained transformers implement an algorithm proposed at Theorem 4.1 and Section 6?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your helpful comments. We address your questions as follows.\\n\\n> *Q1.* While the result is nice to have, it's unclear how the main theorem of this work compares to results from existing works on universal approximation capabilities of transformers.\\n\\n*A1.* Thanks for your question. Please note that our paper considers an in-context learning task which is rather complicated. You are correct that existing results on universal approximation capabilities of transformers cannot imply any concrete results in our setting, and this is the strength of our results. 
\\n\\n\\n> *Q2.* Moreover, it's also unclear whether gradient descent or standard approximation methods used to learn such models will extract some sort of similar structure. The authors state this in their conclusion, however this is a relevant weakness of this work and limits its utility.\\n\\n*A2.* While our theory does not demonstrate whether standard training algorithms can indeed give a transformer model that can accomplish our desired tasks, we demonstrate this through experiments. We also propose a particular training method based on curriculum, which improves the utility of our results.\\n\\n\\n> *Q3.* The curriculum setup sounds interesting, however it seems to require a priori knowledge of the causal order and this may not be available in practice. In the definition of a BN, the causal order seems to respect the index order. Does the main theorem hold when the ordering is not known, i.e. the variables are permuted uniformly in the samples?\\n\\n\\n*A3.* Please refer to *A1.* in our response to all reviewers. We believe such a setting is natural given that we are considering an autoregressive task. The order we consider does not have to be the \\u2018causal\\u2019 order. It can be any order of variables (and there always exist a Baysian network following this order that can describe the joint distribution of the random variables). \\n\\n\\n> *Q4.* While experiments on simulated data validate the theory, it would also be nice to have some validation on real-life data (even on few variables).\\n\\n*A4.* Thank you for your suggestion. We will add real data experiments in our revised paper.\\n\\n> *Q5.* Typos.\\n\\n*A5.* Thanks for pointing out the typos. We will fix them in the revision.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you very much for your constructive and helpful comments. 
We realize that many of your concerns are caused by unclear terminologies and insufficient background explanations, and that our work can benefit from a thorough revision. Therefore, we decide to withdraw our submission. We will carefully revise the paper based on your valuable feedback before submitting it to future venues. We would like to respond to your major concerns as follows.\\n\\n> *Q1.* Our setting requires a priori knowledge of the causal order of variables.\\n\\n*A1.* Our paper aims to demonstrate that transformers can *autoregressively* generate new samples according to an estimated Bayesian network. Having a pre-determined order of the variables is natural and is consistent with the practice, since we consider autoregressive generation.\\n\\nPlease also note that without a pre-determined order of variables, Bayesian networks are not identifiable because multiple Bayesian networks can equivalently describe the same joint distribution of variables. Specifically, it can be shown that for any given order of variables, there exists a Bayesian network that describes the same joint distribution, satisfying that each variable can only be a descendant of the preceding variables according to that order.\\n\\n> *Q2.* This is not a paper about learning Bayesian networks in-context; rather, it is a paper about whether Transformers can estimate conditional probabilities of discrete variables in-context.\\n\\n*A2.* Thanks for pointing this out. We will clarify in our revision that our goal is to show that there exists a simple transformer model that can (i) estimate the conditional probabilities of the Bayesian network according to the context, and (ii) autoregressively generate a new sample according to the Bayesian network with estimated conditional probabilities. We have made clarifications in the abstract, and we will consider changing the title of the paper to further avoid confusion. 
\\n\\nPlease note that similar settings of estimating the conditional probabilities of the Bayesian network according to the context is standard and has been considered in recent work [1].\\n\\n> *Q3.* What is the precise objective by which the Transformer is trained?\\n\\n*A3.* Our theory focuses on studying the expressive power of transformers. The nature of this kind of research is that it does not necessarily focus on any particular training objective. Instead, we just aim to show that there exists such a transformer model that can accomplish the desired task. Similar studies of the expressive power of transformers are also considered in previous works such as [2].\\n\\nIn our experiments, we train the transformers to minimize the cross-entropy loss of predicting the masked-out variables in queries. We will add clarifications to this together with a more detailed explanation of the curriculum. \\n\\n\\n> *Q4.* Explanations of the 'naive Bayes' and 'Bayesian inference' methods we used for performance comparison with transformers in our experiments.\\n\\n*A4.* We have realized that these are confusing terminologies, and we will replace them with clearer names in the revision.\\n\\nSuppose that there are $M$ variables $X_1,\\\\ldots,X_M$, and that we have $N$ independent groups of observations $(X\\\\_{11},\\\\ldots,X\\\\_{M1}),\\\\ldots,(X\\\\_{1N},\\\\ldots,X\\\\_{MN})$. Further suppose that we have query observations $X\\\\_{1q},\\\\ldots,X\\\\_{(m_0-1)q}$, and our goal is to estimate the conditional probabilities of the form:\\n\\n$P( X\\\\_{m_0} = j | X\\\\_{1} = X\\\\_{1q},\\\\ldots, X\\\\_{m_0-1} = X\\\\_{(m_0-1)q} )\\\\quad\\\\quad (Eq1) $.\\n\\nDenote by $\\\\mathcal{P}(m_0) $ the parent set of the $m_0$-th variable according to the Bayesian network. 
In our current manuscript, we mean by 'naive Bayes' the method that estimates the conditional probability above in (Eq1) as \\n\\n$ \\\\frac{ | \\\\{ i\\\\in [N]: X\\\\_{m_0i} = j, \\\\text{ and } X\\\\_{mi} = X\\\\_{mq} \\\\text{ for all } m = 1,\\\\ldots,m_0-1 \\\\} | }{ | \\\\{ i\\\\in [N]: X\\\\_{mi} = X\\\\_{mq} \\\\text{ for all } m = 1,\\\\ldots,m_0-1 \\\\} | }$.\\n\\nWe mean by \\u2018Bayesian inference' the method that estimates the conditional probability above in (Eq1) as \\n\\n$ \\\\frac{ | \\\\{ i\\\\in [N]: X\\\\_{m_0i} = j, \\\\text{ and } X\\\\_{mi} = X\\\\_{mq} \\\\text{ for all } m\\\\in \\\\mathcal{P}(m_0) \\\\} | }{ | \\\\{ i\\\\in [N]: X\\\\_{mi} = X\\\\_{mq} \\\\text{ for all } m\\\\in \\\\mathcal{P}(m_0) \\\\} | } $.\\n\\nClearly, 'Bayesian inference' can utilize more observations to calculate frequencies and is therefore more efficient. According to our theory, the performance of transformers should be comparable to 'Bayesian inference' and better than 'naive Bayes', particularly when the ground truth Bayesian network is complex.\\n\\nWe hope that our response above can address most of your concerns. Thank you again for your valuable feedback. \\n\\nBest regards,\\n\\nAuthors\\n\\n[1] Eshaan Nichani, et al. \\\"How Transformers Learn Causal Structure with Gradient Descent.\\\" ICML 2024.\\n\\n[2] Yu Bai, et al. \\\"Transformers as statisticians: Provable in-context learning with in-context algorithm selection.\\\" NeurIPS 2023.\", \"title\": \"Response to All Reviewers\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the ACs for handling our paper and thank all the reviewers for the constructive and helpful comments. We realize that many of the reviewers' concerns are caused by unclear terminologies and insufficient background explanations, and that our work can benefit from a thorough revision. 
Therefore, we have decided to withdraw our submission. We will carefully revise the paper based on the valuable feedback we receive, and submit the work to a future venue.\"}", "{\"comment\": \"Thank you very much for your detailed comments and constructive suggestions. We address your questions as follows.\n\n>*Q1.* A key flaw with this paper is a misrepresentation of the problem: this is not a paper about learning Bayesian networks in-context; rather, it is a paper about whether Transformers can estimate conditional probabilities of discrete variables in-context. \n\n*A1.* Please refer to *A2* in our response to all reviewers. We will clarify in our revision that our goal is to show that there exists a simple transformer model that can (i) estimate the conditional probabilities of the Bayesian network according to the context, and (ii) autoregressively generate a new sample according to the Bayesian network with estimated conditional probabilities. We have made clarifications in the abstract, and we will consider changing the title of the paper to further avoid confusion. \n\n\n>*Q2.* Following up on the previous point, the paper can be interpreted as asking: can Transformers estimate joint distributions of discrete variables in-context? The technical result is in service of showing that true conditional probabilities can be captured by the hypothesis class of two-layer Transformers. But, the significance of this finding lacks context: what is the broader implication if Transformers can estimate multivariate discrete distributions in-context? What questions will this help us answer in the broader context of machine learning? The authors need to properly contextualize the questions and findings in their paper.\n\n*A2.* Thank you very much for your suggestions.
We do agree that our paper needs a better explanation of the background, and we will focus on this in our revision.\\n\\n>*Q3.* The technical setup lacks clarity about details that are essential to a paper about Transformers and in-context learning: what is the precise objective by which the Transformer is trained? Is it a causal decoding Transformer trained to minimize the negative log likelihood of the next categorical variable given the previous ones in a particular sample? \\n\\n*A3.* Please refer to *A3* in our response to all reviewers.\\n\\n>*Q4.* There are technical details that do not appear to be correct. For example, in Eqn. 3.2 \\u2026\\n\\n*A4.* Thank you for pointing this out. We believe your confusion is caused by several typos and unexplained notations. We will fix them in the revision.\\n\\n>*Q5.* The empirical studies also lack clarity about key details. For example, in lines 261 and 262,...\\n\\n*A5.* We will thoroughly revise the paper to improve the presentation. Regarding 'naive Bayes' and 'Bayesian inference', please refer to *A4* in our response to all reviewers.\"}", "{\"summary\": \"This paper considers the ability of Transformers to estimate in-context the conditional probabilities of categorical variables. Theoretically, the paper seeks to prove that for any joint distribution over categorical variables and an ordering over them, there exists a two-layer Transformer that can represent the true conditional probabilities of each variable given those that are earlier in the ordering. Empirically, the paper considers experiments on synthetic data where Transformers are trained on samples from different Bayesian networks that all come from some family of graphs. 
The paper compares the probabilities estimated in-context to the ground truth as well as those estimated via naive Bayes and Bayesian inference, finding trends that suggest that Transformers have the capacity to estimate conditional probabilities in-context.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper investigates whether Transformers are capable of estimating multivariate discrete distributions in-context. In and of itself, this research question has not been studied yet, to the best of my knowledge.\", \"weaknesses\": [\"A key flaw with this paper is a misrepresentation of the problem: this is not a paper about learning Bayesian networks in-context; rather, it is a paper about whether Transformers can estimate conditional probabilities of discrete variables in-context. In lines 106-107, where the problem is introduced, note that the Bayesian network that is specified is not the true Bayesian network (BN) that defines the joint distribution of the variables: it is simply a factorization of the distribution via chain rule given a particular variable ordering. This factorization is generic and valid for any distribution. By contrast, the __true__ BN that underlies a distribution can entail far more conditional independences than is given by the chain rule. Even if this paper was about learning BNs, BNs are anyways not identified by observed data: it is known theory that multiple BNs entail the exact same set of conditional independences.\", \"Following up on the previous point, the paper can be interpreted as asking: can Transformers estimate joint distributions of discrete variables in-context? The technical result is in service of showing that true conditional probabilities can be captured by the hypothesis class of two-layer Transformers. But, the significance of this finding lacks context: what is the broader implication if Transformers can estimate multivariate discrete distributions in-context? 
What questions will this help us answer in the broader context of machine learning? The authors need to properly contextualize the questions and findings in their paper.\", \"The technical setup lacks clarity about details that are essential to a paper about Transformers and in-context learning: what is the precise objective by which the Transformer is trained? Is it a causal decoding Transformer trained to minimize the negative log likelihood of the next categorical variable given the previous ones in a particular sample? Details about how the Transformer is trained are completely missing. Further, for completeness, the authors should also properly define every piece of notation like $0_{dm}$ and $\\\\mathbf{e}_{N+1}$ -- I imagine these define a matrix of 0s and the $N+1$-th standard basis vector, respectively? But readers shouldn't have to interpret key pieces of notation.\", \"There are technical details that do not appear to be correct. For example, in Eqn. 3.2 that defines the input matrix $\\\\mathbf{X}$ to the Transformer, the dimensions do not make sense: each $\\\\mathbf{x}_{ij}$ entry is a $d$-dimensional one-hot encoding, as stated in line 117, but the vector $p$ is $(M+1)d$-dimensional according to Eqn. 3.1. Thus, the last row of the input $\\\\mathbf{X}$ seems to have more columns than the rows above. Another example is in line 190: to specify the output of the model, the authors indicate $\\\\mathbb{R}^d$ and define operations that would produce a $d$-dimensional real-valued vector, but for categorical variables, we need to output vectors in the $d$-dimensional simplex. The composition of the $\\\\mathrm{Read}(\\\\cdot)$ and $\\\\mathrm{Linear}(\\\\cdot)$ functions would not produce vectors that are probabilities that sum to 1, as needed for evaluating the log likelihood or for sampling discrete variables.\", \"The empirical studies also lack clarity about key details. 
For example, in lines 261 and 262, the phrase \\\"the probability distribution of those graphs ...\\\" is not parseable. What is this referring to? Second, the methods that are compared with a Transformer -- naive Bayes and Bayesian inference -- are significantly lacking in clarity. How is naive Bayes being applied to the density estimation problem considered in this paper? Bayesian inference is not a model, it is a method, so what is the underlying model on which Bayesian inference is applied and what is the posterior being inferred? These details are not clear from the paper and limit the ability of a reader to make sense of the empirical findings.\"], \"questions\": [\"Can you please clarify the problem formulation in this paper? I don't think it's accurate to say that this paper is about Bayesian network learning. However, I'd like the authors to reflect on this aspect, and clarify this point.\", \"Can you elaborate significantly on how the Transformer is trained, including details about: is it a causal decoding Transformer? Is it trained to minimize the negative log-likelihood of the next variable given previous ones? Include all details that can help a reader clearly understand the training objective.\", \"Can you shed light on the dimensions of the input $X$ and in particular, clarify the apparent mismatch in dimensions of the last row against the previous rows?\", \"Can you clarify how the output of the final linear layer is transformed to produce a proper vector of probabilities for a categorical distribution?\", \"Can you clarify the missing details about the empirical studies that I noted in the \\\"weaknesses\\\" section?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies the problem of in-context learning in transformers. In particular, it focuses on whether transformers are able to learn Bayesian networks in-context. 
In this setting, the model, given N different realisations of a specific graph and a query sample, is tasked with predicting the probability distribution associated with a missing variable (see construction in Eq. 3.2). The assumption is that, if the model is able to infer the conditional probabilities associated with the Bayesian network, it can then use them to predict the value of the missing variable. In addition, once the model has captured such conditional probabilities, it is in principle able to generate new samples from the inferred graphical model (Algorithm 1).\\n\\nThe authors first provide a theoretical construction for a two-layer transformer which is capable of estimating the conditional probabilities of the Bayesian network according to the context, and autoregressively generating a new sample according to the Bayesian network with estimated conditional probabilities (Theorem 4.1, Lemma 6.1, and Lemma 6.2). \\n\\nThe authors also conduct an empirical analysis to show the performance of trained transformers (with up to 6 layers) on the task of in-context learning three graph structures, namely a \\\"general graph\\\", a \\\"tree\\\" and a \\\"chain\\\". The performance of the model is studied by varying the number of in-context examples seen at training time and evaluating the model on different numbers of test in-context samples. 
The results show some evidence that transformers are capable of learning Bayesian networks in-context.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper follows a relevant and fruitful line of work studying in-context learning (ICL) in controlled settings.\", \"The paper proposes the interesting benchmark of Bayesian networks to study ICL capabilities of transformers.\", \"The paper provides a theoretical construction of a simple 2-layer transformer capable of estimating conditional probabilities of Bayesian networks and of generating a new sample autoregressively from the inferred graphical model.\"], \"weaknesses\": [\"I find the notation quite confusing and the way the paper is organised makes it a bit hard to follow. For example, looking at Algorithm 3.1, it seems that the input to the model is of size (2M+1)d x (N+1), where the N+1 factor takes the query into account, while it seems the Read Function takes as input a tensor of size (2M+1)d x (N). In addition, I found it a bit hard to follow the description of how the training and test datasets are generated (paragraphs Datasets and Metrics in Section 5.1). Could the authors clarify these points?\", \"As far as I understand, the paper focuses on only three different Bayesian networks with a fixed structure (shown in Figure 1). If my understanding is correct, I believe more varied and diverse graph structures should be considered to better support the authors' thesis. Can the analysis be extended to other graphs?\", \"On a related note, why are only binary variables considered? It would be interesting to extend the analysis to variables taking values from a vocabulary of a certain size.\", \"In section 5.1 (model paragraph), the dimensions of p and p_q change compared to Eq. 3.1, where they were defined. Could the authors please clarify?\", \"From the experiments in section 5.3, a one-layer transformer seems to be enough. 
This result contrasts with the theoretical construction, which in principle would require a 2-layer model. Could the authors better elaborate on this point?\", \"There are several typos across the manuscript. See, for example, the missing link in the \\\"Curriculum Design\\\" paragraph (\\\"A visualization of the curriculum is in XXX\\\")\"], \"questions\": \"See weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4fyg68nmd7
Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream
[ "Abdulkadir Gokce", "Martin Schrimpf" ]
When trained on large-scale object classification datasets, certain artificial neural network models begin to approximate core object recognition (COR) behaviors and neural response patterns in the primate visual ventral stream (VVS). While recent machine learning advances suggest that scaling model size, dataset size, and compute resources improve task performance, the impact of scaling on brain alignment remains unclear. In this study, we explore scaling laws for modeling the primate VVS by systematically evaluating over 600 models trained under controlled conditions on benchmarks spanning V1, V2, V4, IT and COR behaviors. We observe that while behavioral alignment continues to scale with larger models, neural alignment saturates. This observation remains true across model architectures and training datasets, even though models with stronger inductive bias and datasets with higher-quality images are more compute-efficient. Increased scaling is especially beneficial for higher-level visual areas, where small models trained on few samples exhibit only poor alignment. Finally, we develop a scaling recipe, indicating that a greater proportion of compute should be allocated to data samples over model size. Our results suggest that while scaling alone might suffice for alignment with human core object recognition behavior, it will not yield improved models of the brain's visual ventral stream with current architectures and datasets, highlighting the need for novel strategies in building brain-like models.
[ "scaling laws", "neural alignment", "behavioral alignment", "computer vision", "primate visual ventral stream" ]
Reject
https://openreview.net/pdf?id=4fyg68nmd7
https://openreview.net/forum?id=4fyg68nmd7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tb6lyHTNAd", "tPGNfyesR2", "oyWyXqizkZ", "mBdVvE3Ci8", "lT8EXSIsUP", "fWMTD9umBR", "YXVZ4e4N3z", "WUDa08GYgA", "Vsqc0JB9Rk", "N4A4XwgrF8", "MpwT5MY0xa", "Lmfh5e7sVe", "L4NcnuXHRg", "HKq72bwVGC", "Gj70cmPCmm", "D7WzR8VyDj", "CwRt2fOYS3", "9TymV1OpRM", "6m7XbgNSzd", "6kdTe6zemY", "57YVy2e5r2", "2hc94wNRmF", "07AKpcTEvA" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment" ], "note_created": [ 1730689289283, 1732722150419, 1733159492444, 1732722589240, 1732721425334, 1733158220397, 1733197655218, 1732722756901, 1732722310144, 1733198699724, 1732721723268, 1732722197357, 1730687588389, 1732721950574, 1733159543812, 1732722629214, 1730045474782, 1732721769893, 1732723161963, 1734854042804, 1737524290164, 1730731946951, 1732722886417 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_gmHr" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_Z99m" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_Z99m" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_gmHr" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_fpoP" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_fpoP" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13938/Reviewer_Z99m" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_rNYX" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ], [ "ICLR.cc/2025/Conference/Submission13938/Area_Chair_J9Bn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13938/Reviewer_Z99m" ], [ "ICLR.cc/2025/Conference/Submission13938/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a way of calculating scaling laws for neural and behavioral alignment with respect of training data and parameter size of models. It offers an interesting overview of the current status of models and its performance on these alignment challenges.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper is well written. The introduction offers a good view of the literature and it is easy to follow the procedure they use to make the evaluation. The results are clearly presented and explained. It provides a good overview of the current landscape of models in the context of neural and behavioral alignment.\", \"weaknesses\": \"My main observation about this work is that, while it provides valuable insights and a well-illustrated overview of the current landscape of models and their alignment with neural of behavioral benchmarks, it could benefit from more clarity on how these findings might guide future advancements. The paper mentions previous work with similar findings, as noted in the discussion; however, it would be helpful to understand more concretely how this work can serve as a foundation for the next steps in the field and how scaling laws can truly help scientists develop the next generation of more brain-like models. 
For instance, what kinds of hypotheses can be drawn from scaling laws that can be tested by adding or removing samples/compute from models being constructed to be more brain-like?\\n\\nAlthough the limitations section mentions that \\u2018these functions may not generalize beyond the scales tested,\\u2019 this suggests a natural boundary for the impact of these results. Could the authors estimate, based on their scaling laws, what order of magnitude increase in dataset or parameter size might be needed to significantly improve neural alignment beyond the observed plateau?\\n\\nWhile I understand that this point is mentioned in the limitations section, I feel it is a significant oversight not to include recurrent models. It is encouraging that the paper mentions that inductive bias in the form of convolution seems to yield faster returns, but this feels limited, given that most of the models tested in these benchmarks are much deeper than what might be expected for an architecture resembling the visual cortex. For instance, it would be interesting to see how the scaling laws would apply to CorNet. Is it the case that the more brain-like the model, the easier it is to escape the scaling laws? That would be very impactful for the community. \\n\\n\\nI may have missed it, but I did not see mention of self-supervised models or robust models, and how the scaling laws operate on models trained with these types of frameworks?\", \"questions\": [\"What are the implications of this work, given the limitations already presented in the paper?\", \"What would be the predictions for a model that closely resembles the visual cortex, such as CorNET?\", \"Given that the paper focuses on scaling, have the authors considered how their scaling laws might apply to or change for models pre-trained on much larger datasets like LAION before fine-tuning on ImageNet? 
This could provide insights into whether the observed plateaus persist across different pre-training regimes\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to questions 1\", \"comment\": \"Thank you for your insightful and constructive review of our manuscript. We are pleased that you found our work well-written and that you recognize the novelty in demonstrating varied scaling effects across different cortical areas. Below, we address each of your comments and questions in detail.\\n\\n**1\\\\. What are the implications of this work, given the limitations already presented in the paper?**\\n\\nGiven the limitations presented in the paper\\u2014such as the specific range of model sizes and dataset volumes examined, the subset of architectures evaluated, and the datasets used\\u2014the key implications are as follows:\\n\\n1. **Scaling Alone Is Insufficient for Neural Alignment**: The study reveals that while scaling up model parameters and training data consistently enhances behavioral alignment with human performance, it leads to saturation in neural alignment with the primate visual ventral stream (Figs 1b, 5, 7a). This indicates that simply increasing scale using current architectures and datasets is not enough to achieve better neural alignment (Fig 9). The implication is that alternative approaches are necessary to develop models that more accurately mimic neural representations in the brain. \\n2. **Need for Alternative Modeling Approaches**: The observed saturation in neural alignment suggests that future research should explore new strategies beyond traditional scaling (Fig 2, 7a). This includes integrating biologically inspired architectural features such as feedback mechanisms, leveraging additional data modalities, and developing novel training objectives tailored to better capture the dynamics of neural processing. \\n3. 
**Importance of Inductive Biases**: The findings highlight the significant role of architectural inductive biases in achieving neural alignment. Models with strong inductive biases, like fully convolutional networks (e.g., ResNets and EfficientNets), demonstrate higher initial neural alignment even before training (Figs 2, 7c, 10, 11). This implies that incorporating architectural priors that reflect biological neural structures can improve alignment efficiency without solely relying on scaling. \\n4. **Guidance for Resource Allocation**: By introducing and fitting parametric power-law scaling laws, the study provides a predictive framework for how alignment scales with compute and data (Fig 4a). This quantitative approach offers practical guidance on how to allocate computational resources effectively between model complexity and dataset size to optimize both neural and behavioral alignment. \\n5. **Potential of Adversarial Training and Alternative Learning Signals**: The experiments with adversarial fine-tuning show its potential in enhancing neural alignment beyond the saturation levels observed with standard training methods (Fig 7b). This suggests that incorporating robust training approaches and alternative learning signals could play a crucial role in developing models that better align with neural data. \\n6. **Differential Impact Across Brain Regions**: The discovery of an ordered effect of scaling on alignment across different brain regions provides deeper insights into how scaling differentially affects various levels of neural processing (Fig 5). This suggests that scaling strategies may need to be tailored to target specific regions within the visual cortex to achieve optimal alignment.\\n\\nIn summary, the implications of this work emphasize that while scaling is beneficial for improving behavioral alignment, it is not sufficient for advancing neural alignment with the brain's visual system using current models and datasets. 
This underscores the necessity of exploring new modeling approaches that incorporate biological principles and alternative training strategies. Despite the limitations, such as the specific models and datasets used, these findings offer valuable insights and directions for future research aimed at bridging the gap between artificial neural networks and biological neural processing.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for this in-depth response.\\n\\nI think that the following points are all closely related: **Scaling Alone Is Insufficient for Neural Alignment**, **Need for Alternative Modeling Approaches**, **Importance of Inductive Biases**. These are all aimed at bringing the stalled progress of NeuroAI into greater focus. I believe this is a great point, but it is no longer novel. I think the current manuscript is overly indexed to the plateau of NeuroAI, and not enough attention is paid to how to escape it.\\n\\n**Guidance for Resource Allocation**, **Potential of Adversarial Training and Alternative Learning Signals**: I agree this could be big. But only if scaling laws are a path forward for NeuroAI. As mentioned, I'm excited about the result in Fig. 6 about scaling laws for behavior. This could be the sole focus of the paper if scaling laws indicate we need $X to fully reverse engineer behavior (and $X was not some unrealistic number). I'm not sure that's the case, however. Even in the adversarial training case where the authors show a small boost in alignment, the slope looks extremely flat. If we were to scale up to 10^100 flops, the alignment score would reach 0.51, according to the equation. Again, this is the null result bit that I don't think is novel. We need new ways forward.\\n\\n**Differential Impact Across Brain Regions**: Thanks for this response.\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"1. 
**Lacking evaluation of what model behaviors give rise to alignment:**\\n\\nWe agree that understanding the specific model behaviors that enhance alignment is crucial for advancing the field. While our current study focused on quantifying scaling laws, we are actively investigating the qualitative factors that contribute to neural and behavioral alignment. One aspect we are exploring is the sensitivity of models to spatial frequencies and other visual features that are relevant to human perception. Additionally, we are analyzing the eigenspectrum of response characteristics of these models to identify patterns that correlate with improved alignment. \\n\\n2. **Evaluation of Recent Multimodal Models:**\\n\\nWe acknowledge the growing importance of multimodal vision-language models in current AI research. We have conducted additional evaluations on publicly available large vision-language models, such as CLIP and DINOv2. These models include much larger architectures and are trained on extensive datasets, including LAION, offering a broader scope of training data and objectives compared to our controlled experiments. Our findings reveal that while these multimodal models achieve enhanced behavioral alignment\\u2014likely due to their diverse training objectives and data sources\\u2014their neural alignment still exhibits a saturation effect similar to that observed in unimodal models. This indicates that, despite the advanced training paradigms and data diversity, multimodal models do not fundamentally address the limitations in neural alignment scaling. We have incorporated these results into the revised appendix, including a detailed visualization of the scaling behavior of pretrained models (Figure 9). When compared to Figure 2c, which illustrates the scaling behavior of models we trained, the similar saturation levels observed between pretrained and trained models reinforce the generalizability of our findings. 
These results provide additional evidence that while scaling improves behavioral alignment, neural alignment requires alternative approaches to overcome the observed limitations.\"}", "{\"title\": \"General response #1\", \"comment\": \"We sincerely appreciate the time and effort the reviewers have invested in reviewing our manuscript. Their insightful comments and constructive feedback have been invaluable in improving the quality and clarity of our work.\\n\\n**Summary of Key Additions to the Updated Manuscript:**\\n\\n1. **Improving Robustness of Curve Fits via Confidence Intervals:** \\n * To address concerns about the variability and reliability of our curve fits, we now include 95% confidence intervals estimated from 1,000 bootstrapped samples for all scaling curves. This statistical enhancement provides greater confidence in our findings and helps verify the robustness of our scaling laws. \\n * Additionally, we have reworked all figures to use coherent color palettes and improved readability. These visual refinements ensure that the figures are more accessible and easier to interpret, facilitating a clearer understanding of our results. \\n2. **Impact of Inductive Biases on Alignment Dynamics:** \\n * We have elaborated on how inductive biases in neural network architectures influence alignment. Our analysis now includes additional figures (Figures 7c, 10, 11\\\\) demonstrating that models with strong inductive biases, such as fully convolutional networks like ResNets and EfficientNets, exhibit higher initial neural alignment. This insight sheds light on the importance of architectural choices in achieving efficient and effective alignment with neural data. \\n3. **Influence of Different Training Signals:** \\n * We have investigated how different training signals, including self-supervised learning methods like SimCLR and DINO, and adversarial fine-tuning, impact alignment with the brain and behavior. 
Our findings, presented in Figures 7a, 7b, 7d, 12, and 13, show that these training strategies can enhance alignment, particularly for models with weaker inductive biases. This suggests that rich and diverse learning signals facilitate faster and more effective alignment with neural representations. \\n4. **Evaluation of Pretrained and Multimodal Models:** \\n * We have extended our analyses to include evaluations of larger pretrained models and multimodal models (e.g., CLIP and DINOv2) trained on extensive datasets like LAION. The results, detailed in Figure 9, indicate that they exhibit saturation in alignment, confirming our earlier results in Figure 2. This reinforces our conclusion that scaling alone is insufficient to overcome the limitations in neural alignment and highlights the need for alternative approaches. \\n5. **Additional Discussion on Future Directions:** \\n * We have added a concise section outlining potential future research avenues. These include exploring adversarial training to push neural alignment beyond current saturation levels, leveraging biologically inspired architectures to develop more compute-efficient models, and investigating co-training with brain data to enhance alignment with neural representations. \\n6. **Clarifications and Corrections:** \\n * We have addressed specific points raised by the reviewers, such as clarifying the alignment saturation values in Figure 1, discussing the impact of scaling on different brain regions (Figure 5), and correcting typographical errors. We have also provided more context on the novelty of our work relative to existing literature and emphasized the practical implications of our findings.\\n\\nWe believe that these additions and revisions have strengthened our manuscript by providing deeper insights into the mechanisms underlying neural and behavioral alignment, and by addressing the key concerns raised in your reviews. 
We kindly ask you to consider these new analyses and enhancements when evaluating our work.\\n\\nYour thoughtful feedback has been instrumental in refining our study, and we are grateful for your contributions to improving the quality of our research. We hope that the improvements we have made not only address your concerns but also demonstrate the significance and novelty of our contributions to the field.\\n\\nThank you once again for your time and consideration.\"}", "{\"title\": \"Clarification\", \"comment\": \"**Response: Thank you for your positive remark about Figure 6b. We are pleased that you find this aspect of our work engaging. We understand that you suggest focusing on the \\\"scaled-up model of behavior\\\" to strengthen the paper. Could you please clarify what you mean by \\\"scaled-up model of behavior\\\"? Are you proposing that we place greater emphasis on the behavioral alignment achieved at larger scales, or perhaps delve deeper into how scaling impacts behavioral predictions of the models?**\\n\\nI want to know if scale is all you need in order to reverse engineer human behavior on the psychophysics task. That would be truly remarkable. By eye though, it looks like, as accuracy on those tasks approaches 100% (is this even a reasonable number to reach?), behavioral alignment maxes out at ~0.7. It would be nice to see if that is indeed the case and if any architectural choices/mechanisms can increase/decrease the likelihood of developing a complete model.\"}", "{\"comment\": \"I want to thank the authors for such an in-depth response. I have to say that I align with reviewer Z99m: the point on \\\"Scaling Alone Is Insufficient for Neural Alignment\\\" has been raised before and unfortunately is not a novel contribution. I was very optimistic about the point raised in \\\"3. Importance of Inductive Biases\\\", but I feel the evidence is rather limited. 
According to the results on CorNet, it is not super clear that a more \\\"biologically plausible\\\" model can escape the scaling laws easily, and there is little understanding of how to assess these inductive biases through the lens of the scaling laws, such that the way forward is clear.\\n\\nThank you for providing more results on points 5 and 6. I think there could be something interesting if the scaling laws can be calculated for robust models, or at least an understanding that can motivate novel experiments and efforts on the model and data side to move beyond the saturation mark.\"}", "{\"title\": \"Response to questions 1-2\", \"comment\": \"Thank you for your comprehensive and thoughtful review of our manuscript. Below, we address each of your questions in detail.\\n\\n**1\\\\. Could there be additional context on the novelty of this work relative to existing literature on model size effects?**\\n\\nThank you for emphasizing the importance of contextualizing our work within the existing literature on model size effects. Previous work primarily showed scaling with respect to ground-truth performance. Our work evaluates the scaling behavior of models with respect to their internal similarity to brain responses, which has to date not been attempted as far as we are aware. Our study introduces several novel contributions that set it apart from previous research:\\n\\n1. **Comprehensive Model and Dataset Exploration:** While previous studies have examined specific aspects of model scaling, our work systematically explores over 600 models across various architectures and dataset sizes. This extensive evaluation provides a more holistic understanding of how scaling dimensions interact to influence neural and behavioral alignment. \\n2. 
**Differential Scaling Laws for Neural and Behavioral Alignment:** Our research identifies distinct scaling laws for neural and behavioral alignment, revealing that while behavioral alignment continues to improve with scale, neural alignment reaches a saturation point. This differentiation offers deeper insights into the limitations of current scaling strategies and their differential impacts on various alignment metrics. \\n3. **Scaling Recipe for Optimal Compute Allocation:** We introduce a novel scaling recipe that optimally allocates compute between model size and dataset size to maximize alignment (Fig 4). This practical guideline is a significant advancement, offering actionable recommendations for future model training strategies aimed at enhancing brain alignment. \\n4. **Granular Analysis Across VVS Hierarchy:** Our study delves into how scaling impacts different regions within the primate visual ventral stream (VVS), from V1 to IT and behavioral outputs (Fig 5). This hierarchical analysis reveals that higher-level regions benefit more from scaling, a detail that had not been thoroughly examined in prior work. \\n5. **Public Release of Extensive Resources:** By open-sourcing our training code, evaluation pipeline, and a vast collection of model checkpoints, we provide invaluable resources for the research community. This transparency facilitates reproducibility and enables other researchers to build upon our findings, thereby accelerating progress in the field.\\n\\nThese contributions collectively advance the understanding of how model scaling influences alignment with both neural and behavioral aspects of the primate visual system, offering new perspectives and practical tools that were not previously available.\\n\\n**2\\\\. Is it possible to control inductive biases more rigorously, either quantitatively or qualitatively?**\\n\\nWe have expanded the investigation of inductive biases and their impact on alignment in the updated manuscript. 
Specifically, we analyze the evolution of neural and behavioral alignment during supervised training across different architectures (Figure 8c). Our results confirm that models with strong priors, such as convolutional architectures, exhibit higher neural alignment at initialization compared to more generalist models, like vision transformers.\\n\\nAdditionally, we present a detailed comparison of alignment at initialization across architectures in Figure 11 (Appendix), further supporting the role of inductive biases in early alignment.\\n\\nWe also explore how inductive biases interact with different training objectives in Figure 8d. For example, while the alignment of a ResNet model shows only slight variation between supervised and self-supervised objectives, the alignment of a ViT model is significantly influenced by the training objective. Notably, the self-supervised objective provides a richer learning signal, resulting in a faster rise in alignment during training. This suggests that inductive biases, combined with learning objectives, play a critical role in shaping alignment dynamics.\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"> I may have missed it, but did not see mention on self supervised models or robust models and how the scaling laws operate on models trained on these type of frameworks?\\n\\n#### **1\\\\. Self-Supervised Models**\\n\\nTo investigate SSL scaling, we conducted additional experiments using SimCLR (Contrastive Learning) across various model and data scales. In the revised manuscript, **Figure 7a** illustrates the scaling curves for SimCLR models trained on subsets of ImageNet. Additionally, **Appendix Figure 13** provides a per-region breakdown of alignment during SimCLR training.\\n\\n**Findings:**\\n\\n* **Neural Alignment:** Self-supervised models exhibit similar saturation in neural alignment as supervised models. 
Although SSL enhances the richness and diversity of learned representations, it does not fundamentally alter the scaling laws governing neural alignment with the primate visual ventral stream. \\n* **Behavioral Alignment:** Consistent with supervised models, self-supervised models show continuous improvements in behavioral alignment with scale, following a power-law relationship without noticeable saturation within the tested ranges.\\n\\n**Implications:** \\nThese results demonstrate that SimCLR models provide comparable improvements in behavioral alignment to those achieved by supervised learning. However, the persistent saturation in neural alignment suggests that scaling alone, regardless of the learning paradigm, is insufficient to achieve higher neural alignment with the primate visual system.\\n\\n#### **2\\\\. Robust Models**\\n\\nWe also explored the impact of adversarial training, a robust learning approach, on scaling behavior. Our preliminary experiments focused on adversarial fine-tuning of existing models, as opposed to training adversarially from scratch. The revised manuscript now includes **Figure 7b**, which illustrates the scaling curves of adversarially fine-tuned models.\\n\\n**Findings:**\\n\\n* Adversarial fine-tuning improved both neural and behavioral alignment along the scaling curves estimated for non-adversarially trained models. \\n * These improvements suggest that adversarial training can raise the saturation levels of neural alignment more effectively than conventional training. \\n * Our initial results also indicate that adversarial fine-tuning is more compute-efficient compared to adversarial training from scratch.\\n\\n**Implications:** \\nThese findings highlight adversarial training as a promising direction for overcoming the limitations of current scaling laws in neural alignment. 
While further investigation is needed, the initial evidence suggests that robust learning techniques may push alignment higher or help break through the observed saturation levels.\"}", "{\"title\": \"Response to rebuttal from reviewers\", \"comment\": \"I thank the authors for responding to our reviews. I still maintain that the current submission has interesting points for discussion relevant to ICLR and recommend acceptance. I do acknowledge the points of concern from the more critical reviews of this paper, particularly the connection to prior works establishing the relationship between architecture / dataset scaling and fit to neural data. I am not changing my score after reading the other reviews and author response, and believe that the paper meets the bar for presentation at ICLR.\"}", "{\"title\": \"Response to weaknesses 1\", \"comment\": [\"We sincerely appreciate your review and the time you invested in evaluating our manuscript. We strongly feel that this submission is a significant contribution to the NeuroAI field and that it should thus be featured at ICLR. We address your concerns and questions below.\", \"**1\\\\. Novelty and Contribution to the Field**\", \"*Concern:* You expressed concern about the novelty of our findings and how our work contributes to the field, noting similarities with prior studies such as Linsley et al.\", \"*Response:* Previous studies have indeed explored the relationship between task performance and brain alignment. Almost all of them found a continued positive relationship. We are only aware of two papers (Schrimpf et al. 2018 and Linsley et al.) that raised the point that this relationship might break for data from a single visual area (IT). Our work provides substantially novel findings in several key aspects:\", \"**Dissociation Between Neural and Behavioral Alignment:** Our findings highlight a clear dissociation between neural and behavioral alignment as models scale (Figs. 1b, 5, 7a, 13). 
While behavioral alignment continues to improve, neural alignment saturates\\u2014a phenomenon not quantitatively characterized in prior work.\", \"**Unexpected saturation**: Across many domains of brain function, larger and more task-performant models lead to improved alignment with brain data (e.g. vision \\\\[Yamins et al. 2014\\\\], auditory \\\\[Kell et al. 2018\\\\], language \\\\[Schrimpf et al. 2021\\\\], motor \\\\[Vargas et al. 2024\\\\]). It is thus reasonable to believe that continued performance scaling will yield continued brain alignment gains, and in our experience this is the reality for most of the field; for instance virtually all models on Brain-Score are pre-trained machine learning models. We show that scaling the ML way will not improve alignment to the brain\\u2019s visual system, and pinpoint the primary failure cases to early visual processing (see below).\", \"**Graded Effect Across Brain Regions:** We uncover an ordered effect of scaling on alignment across different brain regions in the visual hierarchy (V1, V2, V4, IT; Figs 5, 13), providing insights into how scaling differentially impacts various levels of neural processing.\", \"**Systematic Quantification:** We provide a systematic and controlled investigation into how scaling both model size and dataset size affects neural and behavioral alignment. By training over 600 models under controlled conditions, we eliminate confounding factors present in studies using pre-trained models with varying architectures and training regimes.\", \"**Parametric Scaling Laws:** We introduce and fit parametric power-law scaling laws to our data, offering a predictive framework for how alignment scales with compute and data. This quantitative approach allows us to extrapolate and predict alignment at scales beyond those directly tested. 
Indeed, we further validated these predictions with unsupervised, multimodal, and adversarially trained variants (new Figs 7ab, 9, and 13).\", \"**The Role of Inductive Biases**: Our analysis demonstrates how inductive biases in neural network architectures (e.g., convolutions in ResNets vs. transformers) affect alignment. In the revised manuscript, we present training dynamics (Figs. 2, 7c, 10, 11 ) showcasing how models with differing biases converge to similar representations over time, albeit starting from distinct initial points. This sheds light on the influence of architectural priors on neural and behavioral alignment.\", \"**Direction Forward:** Our study provides quantitative scaling laws that highlight how computational resources can be effectively allocated between model size and dataset size to optimize neural and behavioral alignment. While these findings underscore the benefits of scaling especially for behavioral alignment (a positive finding), the observed saturation in neural alignment suggests that scaling alone is insufficient to achieve more accurate models of the primate visual ventral stream.\", \"We discuss the necessity of exploring alternative strategies in Discussion *Limitations and Future Directions* section, such as integrating biologically inspired architectural features (e.g., V1 block in VOneNet), co-training with brain data, and developing novel training objectives tailored to better capture neural dynamics. Additionally, our experiments with adversarial fine-tuning demonstrate its potential in raising alignment saturation levels, suggesting that robust training approaches could play a crucial role. 
As such, combining stronger inductive priors with advanced training paradigms like adversarial fine-tuning offers a promising path toward developing next-generation models that more faithfully mimic biological vision systems.\", \"We believe these contributions offer new insights and a valuable direction for future research, emphasizing that scaling alone may not suffice to improve neural alignment, thereby highlighting the need for novel modeling approaches.\"]}", "{\"title\": \"Response to questions 2-3\", \"comment\": \"**2\\\\. What would be the predictions for a model that closely resembles the visual cortex such as CorNET?**\\n\\nThank you for your suggestion to consider recurrent and more biologically inspired models like CORNet. In our experiments, we included CORNet-S, which appears in several figures\\u2019 legends (Figs 2c, 3b, 4a-b, 6a) except where a specific model architecture was highlighted. Our results show that CORNet-S exhibits scaling characteristics similar to ResNet models. Specifically, CORNet-S follows the same trend: neural alignment plateaus as scale increases, while behavioral alignment continues to improve. This finding suggests that recurrence alone, as implemented in CORNet-S, does not inherently address the scaling limitations for neural alignment. To achieve better neural alignment, it may be necessary to incorporate additional biological principles or mechanisms beyond those currently represented in standard ventral stream-inspired architectures. As a future direction, we propose investigating models with stronger biological constraints, such as VOneNet, which integrates biologically plausible features like those found in V1 of the primate VVS.\\n\\n**3\\\\. Given that the paper focuses on scaling, have the authors considered how their scaling laws might apply to or change for models pre-trained on much larger datasets like LAION before fine-tuning on ImageNet? 
This could provide insights into whether the observed plateaus persist across different pre-training regimes.**\\n\\nWe have conducted evaluations with 94 pre-trained models from the timm library to verify the generalizability of our findings (Fig. 9). These neural networks include CLIP and DINOv2 models, which are larger than our largest trained models and are pre-trained on richer, more diverse datasets such as LAION. We also compared variations of these models, such as a base pre-trained model and its fine-tuned counterpart on ImageNet, to investigate the impact of fine-tuning on scaling behavior. Our results show that models with extensive pretraining achieve enhanced behavioral alignment, likely due to their exposure to richer and more varied data. However, similar to the models trained solely on ImageNet or EcoSet, these pre-trained models still exhibit a saturation effect in neural alignment with the primate visual ventral stream (VVS). This indicates that while larger and more diverse datasets improve behavioral predictability, they do not substantially extend the scaling of neural alignment beyond the observed plateau. In the revised appendix, we provide detailed visualizations of the scaling behavior of pre-trained models in Figure 9\\\\. These curves closely follow the scaling patterns estimated for our trained models in Figure 2c, further validating that the observed saturation is consistent across different pre-training regimes and dataset scales. This reinforces our conclusion that scaling alone is insufficient to overcome the neural alignment limitations and highlights the need for alternative approaches.\"}", "{\"summary\": \"In this paper, the authors study the relationship between the size / compute requirement of popular neural network architectures and their training dataset sizes vs alignment to the biological ventral visual stream. 
The authors analyze the alignment of various architectures to the primate VVS using the publicly available Brain-Score benchmark and claim that (1) scaling models by increasing parameter count produces diminishing neural alignment beyond a saturation point in model size, but behavioral alignment continues to increase with model size, (2) Alignment scales with training dataset size, (3) Higher visual areas in the cortical hierarchy show stronger gains in alignment with respect to scaling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper sheds light on the similarity of neural network representations to biological visual representations as a function of model size, compute, and training dataset size. The authors have presented these results in a sound theoretical framework by drawing inspiration from analyses of neural scaling laws.\", \"It is super interesting that different areas of the ventral visual stream respond differently to the scaling of neural architectures/datasets. I have not seen this in prior work to the best of my knowledge and this will raise interesting discussions at ICLR.\", \"I appreciate that the paper is well-written, the figures are legible and accompanied by reasonably detailed captions.\"], \"weaknesses\": [\"**Lacking evaluation of what model behaviors give rise to alignment.** My main point of feedback to further improve this paper is to address what other factors of artificial neural networks contribute to enhancing similarity to biological vision. It is interesting that there exist scaling laws between model / dataset sizes and neural / behavioral alignment, but this has already been documented in prior studies. I urge the authors to further study the qualitative factors (e.g., 
sensitivity to the same spatial frequencies that humans are sensitive to) that give rise to enhanced similarity between ANNs and human vision.\", \"**Missing evaluation of more recent multimodal models.** There has been a surge in multimodal vision language models that, if evaluated in the same framework established by this paper, would produce really intriguing findings on model scaling and alignment. I encourage the authors to include publicly available large vision language models to increase the impact of their findings, as these VLMs are more widely in use now.\"], \"questions\": [\"Would the authors like to highlight how different training signals would influence alignment to brain / behavior? Humans have a rich multimodal perception of the world, they use depth perception, and predominantly learn without supervision. Are the authors able to tease apart the effects of any such factors in their analyses?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to questions\", \"comment\": \"*a) On Figure 6b, could focusing on the scaled-up model of behavior strengthen the paper?*\\n\\n*Response:* Thank you for your positive remark about Figure\\u00a06b. We are pleased that you find this aspect of our work engaging. We understand that you suggest focusing on the \\\"scaled-up model of behavior\\\" to strengthen the paper. Could you please clarify what you mean by \\\"scaled-up model of behavior\\\"? 
Are you proposing that we place greater emphasis on the behavioral alignment achieved at larger scales, or perhaps delve deeper into how scaling impacts behavioral predictions of the models?\\n\\nWe greatly appreciate your insightful feedback and look forward to your clarification to help us enhance our manuscript further.\\n\\n*b) Why are neural scaling laws different for different brain regions and behavior?*\\n\\n*Response:* The differing scaling laws across brain regions and behavior likely stem from the distinct computational functions and complexities associated with each region. Higher-level areas like IT and behavioral outputs involve more abstract and integrative processing, which may benefit more from increased model capacity and data diversity. In contrast, early visual areas like V1 and V2 process more basic visual features and may reach an alignment plateau as they are already well-modeled by simpler architectures or smaller scales. Additionally, the inductive biases inherent in certain architectures may align more closely with the computational principles of specific brain regions, influencing how scaling affects their alignment. In the revised manuscript, we further investigate how inductive biases of models influence the alignment at initialization and during training (Fig 7c, 10, 11).\"}", "{\"title\": \"Response.\", \"comment\": \"This is great, thanks.\"}", "{\"title\": \"Response to questions\", \"comment\": \"> Would the authors like to highlight how different training signals would influence alignment to brain / behavior? Humans have a rich multimodal perception of the world, they use depth perception, and predominantly learn without supervision. Are the authors able to tease apart the effects of any such factors in their analyses?\\n\\nIn the updated manuscript, we further investigate the impact of self-supervised learning methods and adversarial fine-tuning on alignment. 
Our findings are further corroborated in Figures 7a and 13, where supervised training results align closely with those from self-supervised SimCLR training. As shown in the new Figure 7d, models trained with self-supervised objectives like SimCLR and DINO exhibit different alignment dynamics compared to those trained with supervised learning. Specifically, Vision Transformer models (ViT-S) trained with self-supervised methods achieve similar levels of alignment more efficiently than when trained with supervised objectives. This suggests that the richness and diversity of feedback provided by self-supervised learning facilitate faster and more effective alignment with neural representations. Figure 12 contrasts per region alignment of different objectives, which suggests that certain self-supervised models such as DINO can outperform supervised models in behavioral alignment.\\n\\nAdditionally, we examine the effects of adversarial fine-tuning on alignment performance. Our findings indicate that adversarial training can enhance neural alignment beyond the saturation levels observed with standard training methods (Figure 7b). 
This suggests that introducing adversarial perturbations during training encourages models to learn more robust and generalized features that align more closely with neural representations in the primate visual system.\"}", "{\"summary\": \"This paper explores how varying model sizes impact neural and behavioral alignment, seeking insights into the relationship between model architecture and its ability to mimic human-like neural responses and behaviors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The core claim\\u2014model size influencing alignment\\u2014is well supported by the results.\\n\\nInvestigating neural and behavioral alignment is a relevant area with potential applications for improving model interpretability and guiding architecture design.\\n\\nThe study contributes to understanding the role of model scale in alignment, a valuable area for both theoretical insights and practical applications in AI research.\", \"weaknesses\": \"Inductive biases might need better control, either quantitatively or qualitatively, to improve result clarity.\", \"minor_issues\": \"typo at l100 (\\u201cecology\\u201d), unclear reference in l130 (\\u201cUtah\\u201d), and Fig 1 could specify the saturation value.\\n\\nBenchmark sample size for V1 and V2 is relatively small (315), which may impact result generalizability.\\n\\nEquation 7\\u2019s clarity is limited without referencing equations 8 and 9; introducing C(N, D) = 6ND earlier could help.\", \"questions\": \"Could there be additional context on the novelty of this work relative to existing literature on model size effects?\\n\\nIs it possible to control inductive biases more rigorously, either quantitatively or qualitatively?\\n\\nIn Figure 1, what value does alignment saturation reach?\\n\\nIs \\u201cUtah\\u201d in l130 a reference or typo?\\n\\nWould increasing the benchmark sample size for V1, V2 make the results more robust?\\n\\nCould the paper benefit from 
additional discussion on neural versus behavioral alignment, and how better control of inductive biases might enhance interpretability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to weaknesses 2\", \"comment\": \"**2\\\\. Line Fits and Interpretation of Results**\\n\\n*Concern:* You noted that some of the line fits, particularly for neural data, may be overly optimistic and may not accurately reflect non-monotonic trends in the data.\\n\\n*Response:* Thank you for bringing this to our attention. We selected power-law curves based on their widespread use and interpretability in machine learning scaling law literature. However, we recognize that alternative parametric forms might better capture the nuances of our data. We are open to exploring other functional forms, such as sigmoid functions or piecewise linear models, to potentially provide a better fit to the observed trends. Nonetheless, our choice was motivated by the balance between fit quality and the ability to derive meaningful scaling exponents, which facilitate the optimization of compute allocation. We agree that the variability in the neural alignment data warrants careful consideration. 
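For concreteness, the kind of fit and bootstrap we describe can be sketched in a few lines. The snippet below is a minimal illustration on synthetic data, not our exact pipeline: the saturation value (0.48), prefactor, exponent, and the C = 6ND compute convention are stand-in constants chosen only to make the example self-contained and runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(C, y_inf, a, alpha):
    """Alignment as a function of compute; approaches y_inf as C -> infinity."""
    return y_inf - a * C ** (-alpha)

rng = np.random.default_rng(0)

# Synthetic alignment-vs-compute data. Compute follows the C = 6*N*D
# convention (N = parameters, D = training examples); all constants here
# (saturation 0.48, prefactor 50, exponent 0.2) are illustrative only.
N = np.logspace(6, 9, 40)   # 1M .. 1B parameters
D = np.logspace(5, 8, 40)   # paired dataset sizes
C = 6.0 * N * D
y = saturating_power_law(C, 0.48, 50.0, 0.2) + rng.normal(0.0, 0.005, C.size)

p0 = (0.5, 50.0, 0.2)  # initial guess from eyeballing the curve
bounds = ([0.0, 0.0, 1e-4], [1.0, np.inf, 1.0])
popt, _ = curve_fit(saturating_power_law, C, y, p0=p0, bounds=bounds, maxfev=20000)

# Bootstrap the fit to put a confidence interval on the asymptote y_inf.
boot = []
for _ in range(200):
    idx = rng.integers(0, C.size, C.size)  # resample points with replacement
    try:
        b, _ = curve_fit(saturating_power_law, C[idx], y[idx],
                         p0=p0, bounds=bounds, maxfev=20000)
        boot.append(b[0])
    except RuntimeError:
        continue  # skip the rare resample where the fit fails to converge
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"y_inf = {popt[0]:.3f}, 95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")
```

The percentile interval on the asymptote y_inf is analogous to what the added confidence bands in our plots convey: how much the estimated saturation level moves under resampling of the underlying points.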
In response, we have added bootstrapped confidence intervals to our plots to represent the variability and uncertainty in the fits.\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"> Minor issues: typo at l100 (\\u201cecology\\u201d), unclear reference in l130 (\\u201cUtah\\u201d), and Fig 1 could specify the saturation value.\\n\\nThank you for pointing these out, but we believe both are correct: \\\"ecologically viable tasks\\\" refers to tasks that primates would encounter in their natural environment; \\\"Utah array\\\" is the name of the recording device used to retrieve brain data (https://blackrockneurotech.com/products/utah-array/).\\n\\n> Equation 7\\u2019s clarity is limited without referencing equations 8 and 9; introducing C(N, D) = 6ND earlier could help.\\n\\nThank you for highlighting the clarity issue with Equation\\u00a07. We appreciate your suggestion to introduce the relationship C(N,D)=6ND earlier in the manuscript. In the revised version, we have now introduced C(N,D)=6ND prior to describing Equation\\u00a07 (lines\\u00a0210-212).\"}", "{\"metareview\": \"The paper dives deep into the alignment of neural network models and neural response patterns in the visual ventral system. After checking for myself, the quality of the paper and the visuals is excellent, and the findings are indeed intriguing and thought-provoking, showing a great deal of work and craftsmanship. On the other hand, the majority of reviewers point out that while this is definitely positive, the paper is missing actionable insights that set it apart from other papers, which have already been published in this direction. This, together with the fact that there were 7 (!!) 
updates on the manuscript, points to having this paper revised and resubmitted so that the paper shines.\", \"additional_comments_on_reviewer_discussion\": \"\\\"I appreciate the efforts from the authors and I think it exhibits a lot of careful thought and diligent work. My main concern is that the core claim of the paper seems to have limited impact in the field; it is not clear how to move forward. The study then turns into assessing the ground in current architectures, which seem to have been addressed in other pieces of work previously published. There are a few really good leads that can turn this into a very impactful paper, but currently it seems it may be limited for this venue. Looking forward to hearing from the other reviewers.\\\"\\n\\n\\\"Agree with this take. On the one hand, the novelty is debatable and it doesn't provide a path forward. On the other hand, I can imagine citing this paper and the experiments are well done (though I still have issues with the line fits/scaling laws).\\\"\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors investigate so-called neural scaling laws for predicting visual behavior and neural activity. \\\"Scaling laws\\\" are empirical trends that show a relationship between model scale (e.g., compute used or amount of data used in training) and the model's loss on a pretraining task. Here, the authors show different functional forms of scaling laws for predicting neural activity vs. behavior, where the latter is far more promising than the former.\\n\\n**Update**\\nI'm on the fence with this paper. I think there's tons of well-done experiments, and I think the message is important to the field of NeuroAI albeit not totally a novel one. I think the line fits are also still problematic and telling a story that's not totally backed up by the data, although I appreciate that the authors are trying to establish a parallel with work in AI on scaling laws. 
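To make the line-fit concern concrete: whether the neural dots genuinely keep rising can be checked without committing to any functional form at all, e.g. with a rank correlation on the raw points. A toy sketch (the numbers below are made up to mimic the rise-then-dip pattern I think I see, not the paper's actual data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Made-up stand-ins for the neural dots: a slow rise in alignment with
# log-compute, plus a subtle dip at the largest compute budgets.
compute = np.logspace(15, 21, 30)
log_c = np.log10(compute)
alignment = 0.45 - 0.5 * np.exp(-log_c / 8.0)   # rising toward an asymptote
alignment[-8:] -= np.linspace(0.0, 0.08, 8)     # late, subtle decline
alignment += rng.normal(0.0, 0.008, alignment.size)

# Spearman's rho makes no assumption about the functional form of the trend.
rho_all, p_all = spearmanr(log_c, alignment)
rho_tail, p_tail = spearmanr(log_c[-10:], alignment[-10:])
print(f"all points: rho={rho_all:+.2f} (p={p_all:.3f}); "
      f"largest-compute tail: rho={rho_tail:+.2f} (p={p_tail:.3f})")
```

If the largest-compute tail shows a negative rank correlation while the fitted power law implies a monotone increase, the "law" is doing interpretive work the raw data may not support.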
If there were a clear direction forward then this would be a no-brainer accept. As is, I believe it's borderline. I am increasing my score to reflect this.\\n\\nAlso on a separate note, my apologies to the authors for neglecting to respond to all of their points. I was confused by the threading of the responses and mistook the authors' responses to gmHr for responses to my own questions.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The authors completed an extensive sweep through model architectures, compute, and data budgets, in order to give a detailed view of how model scale relates to neural and behavioral brain scores. The key findings here are important (although with debatable novelty): (1) Neural fits asymptote or worsen with scale, (2) behavioral fits are linear with scale (although scale alone appears to be insufficient), (3) the ceiling and form of scaling laws is different for each visual area region that was investigated. Overall, this is a nice capstone on BrainScore, and perhaps is most notable for showing how methods from AI are not always applicable for explaining brain and behavior.\", \"weaknesses\": \"1. The power of scaling laws in domains like language (IMO) is that they imply \\\"all you need is scale.\\\" That is, and in the spirit of the bitter lesson, there are no conceptual barriers to achieving a criterion level of performance, only engineering ones. If this were the case in brain science it would be a true world changer. But as this paper (and others which were cited) show, this is not the case. DNNs + scale are not the solution to explaining the variance in brainscore visual system recordings. In that sense I see a large overlap between the findings and result of [1] in which they found a trade-off between ImageNet performance and BrainScore fits. In both cases, these are null results. 
It is great to show this result, but the lack of a direction forward is concerning.\\n\\nTo drive the point home, in Fig 3, the authors show that training on ImageNet21k (but curiously not WebVision, which has more images) leads to better fits. Indeed this would seem to be a scaling law... but the effect size makes it impractical at best: the model maxes out around 0.45 alignment even after all of that data.\\n\\nFor these reasons I struggle to see how this paper makes a strong contribution to the field. It feels better served as a memo or blog post than a conference or journal paper.\\n\\n2. I think some of the line fits are overly optimistic. For example, in Fig 1, the neuro line is monotonically increasing. But if I squint and just look at the dots, it looks more like a subtle decrease in fits, on average, as a function of compute. This issue is in many of the plots. This relates to my thoughts in (1) about what this all means and whether or not the findings are novel. See Fig 2 ViT behavioral line fits for an example where it's not just for neural data. I am marking down the \\\"Soundness\\\" of the paper because of these line fits, but to be honest I don't have any great suggestions about how to improve the fits while maintaining interpretable \\\"laws\\\" when you have what look like non-monotonic changes like with the neural data in Fig 1c.\\n\\n3. The y limits of the plots should be fixed to one range. It looks like 0-0.7 captures everything. There's too much bouncing around between different ranges in different subplots. Also, could you label what dataset the validation accuracy is derived from on plots where you report it?\\n\\n[1] Linsley et al. Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex.", "questions": "1. On Figure 6b, that's a beautiful correlation. How far can you take it out? Just eyeballing, I'd guess it would get near 0.7. 
Perhaps a pivot for the paper, to get the positive result I think it needs, would be to focus on this scaled-up model of behavior? Just a thought.\\n\\n2. Why do you think neural scaling laws are different for different brain regions and also for behavior? This is a complex question of course, and I don't expect a definitive answer, but perhaps there's something interesting here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to questions 3-5\", \"comment\": \"**3\\\\. In Figure 1, what value does alignment saturation reach?**\\n\\nThank you for seeking clarification on the alignment saturation values depicted in Figure 1\\\\. In our study, each scaling curve asymptotically approaches a constant value as the scale (compute, model size, or dataset size) increases indefinitely. Specifically:\\n\\n* **Behavioral Alignment:** The alignment score for behavioral alignment approaches a saturation value of **1** in the limit of infinite scaling. This indicates perfect alignment between the model's behavioral predictions and primate behavioral data when unlimited compute and data resources are available. \\n* **Neural Alignment:** The alignment score for neural alignment reaches a saturation value of approximately **0.48**. This plateau suggests that beyond a certain scale, increasing compute, model size, or dataset size yields diminishing returns in terms of improving neural alignment with the primate ventral visual stream.\\n\\nThese saturation values are derived from the fitted power-law curves and represent the theoretical maximum alignment achievable under our current model architectures and training datasets.\\n\\n**4\\\\. 
Would increasing the benchmark sample size for V1, V2 make the results more robust?** \\n\\nWe have conducted additional evaluations using more extensive benchmarks available on the Brain-Score platform to assess the robustness of our findings. Figure 8 in the Appendix demonstrates that the results from private benchmarks correlate highly with the public benchmarks used in this study, providing strong evidence of consistency. These supplementary tests validate the reliability of our original findings and suggest that the trends observed in neural and behavioral alignment are robust even when larger sample sizes or additional data are incorporated.\\n\\n**5\\\\. Could the paper benefit from additional discussion on neural versus behavioral alignment, and how better control of inductive biases might enhance interpretability?**\\n\\nIn the new \\\"Generalization Beyond Supervised Training\\\" section of the Discussion, we investigate how different training signals influence alignment with the brain and behavior. Figure 7a confirms our findings by showing that in supervised training, neural alignment saturates while behavioral alignment continues to improve with increased compute, with a detailed breakdown presented in Figure 13\\\\. Additionally, our experiments demonstrate that models trained with self-supervised learning methods, such as SimCLR and DINO, achieve similar levels of alignment more efficiently than those trained with supervised learning, particularly for architectures like Vision Transformers that have weaker inductive biases (Figures 7d and 12). This suggests that rich and diverse learning signals facilitate faster and more effective alignment with neural representations. Furthermore, adversarial fine-tuning enhances neural alignment beyond the saturation levels observed with standard training methods (Figure 7b), indicating that introducing adversarial perturbations encourages models to learn more robust features aligned with neural processing.\"}" ] }
4ftMNGeLsz
FedGO : Federated Ensemble Distillation with GAN-based Optimality
[ "Won-Jun Jang", "Hyeon-Seo Park", "Si-Hyeon Lee" ]
For federated learning in practical settings, a significant challenge is the considerable diversity of data across clients. To tackle this data heterogeneity issue, it has been recognized that federated ensemble distillation is effective. Federated ensemble distillation requires an unlabeled dataset on the server, which could either be an extra dataset the server already possesses or a dataset generated by training a generator through a data-free approach. Then, it proceeds by generating pseudo-labels for the unlabeled data based on the predictions of client models and training the server model using this pseudo-labeled dataset. Consequently, the efficacy of ensemble distillation hinges on the quality of these pseudo-labels, which, in turn, poses a challenge of appropriately assigning weights to client predictions for each data point, particularly in scenarios with data heterogeneity. In this work, we suggest a provably near-optimal weighting method for federated ensemble distillation, inspired by theoretical results in generative adversarial networks (GANs). Our weighting method utilizes client discriminators, trained at the clients based on a generator distributed from the server and their own datasets. Our comprehensive experiments on various image classification tasks illustrate that our method significantly improves the performance over baselines, under various scenarios with and without extra server dataset. Furthermore, we provide an extensive analysis of additional communication cost, privacy leakage, and computational burden caused by our weighting method.
[ "Federated learning", "ensemble distillation", "data heterogeneity", "generative adversarial network" ]
Reject
https://openreview.net/pdf?id=4ftMNGeLsz
https://openreview.net/forum?id=4ftMNGeLsz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y3WEkIuU8y", "tYe0NcIyuC", "tOtWQpO9H6", "n9oaHV35Jr", "lluCgNgriC", "j4UiDT1PNO", "VbNCEmy4Ck", "RmieC3a8AN", "LrZIPfvj9N", "JbW2KxJsLZ", "F2ArjN8lbf", "DJdQtOBpMK", "BfMzHNH6yR", "873hrB4Wzd", "7bIvUU7iPW", "1ivpIKHTIw", "1b0xV0fnn4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review" ], "note_created": [ 1732357499533, 1732520308244, 1733310523399, 1730662552486, 1732181558919, 1732533315115, 1732676843980, 1730691065975, 1737524278523, 1731671388973, 1733310710276, 1732883246405, 1732181536509, 1732863274517, 1734849537649, 1732181323388, 1730646675419 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13731/Reviewer_KVTQ" ], [ "ICLR.cc/2025/Conference/Submission13731/Reviewer_Gtch" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Reviewer_5yYt" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Reviewer_Gtch" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Area_Chair_QbY1" ], [ "ICLR.cc/2025/Conference/Submission13731/Authors" ], [ "ICLR.cc/2025/Conference/Submission13731/Reviewer_KVTQ" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response. 
However, your assurance does not align with the statements in your paper. For instance, in Figure 4, I can clearly see that FedGO requires a pre-trained generator (i.e., Generator preparation) and the server dataset. Moreover, FedGO necessitates the pre-training of additional client discriminators using local private data on the client side, which introduces additional computational and memory costs for the deployment of FedGO. This is because, in actual FL scenarios, the computational and memory resources on the client side are scarce. Therefore, I maintain a skeptical attitude towards this work and keep my current score unchanged.\"}", "{\"comment\": \"Thanks for the reply!\\n\\nThe additional experiments addressed some of my concerns. However, I'm still concerned about the extra overhead of FedGO. \\nWe acknowledge that FedGO has excellent performance and satisfactory additional overhead in scenarios other than the data-free case (G3) + (D3). However, the extra overhead of FedGO in data-free scenarios is impossible to ignore, which greatly reduces the usability of FedGO because we mainly focus on data-free scenarios when discussing federated learning.\\n\\nTherefore, I finally decided to raise my score to 5.\"}", "{\"comment\": \"Thank you for your detailed review and for highlighting concerns about the additional computation overhead in the data-free (G3)+(D3) scenario. We appreciate the opportunity to address this important point.\\n\\nWe conducted additional experiments to validate the effectiveness of data-free FedGO when significantly reducing computation overhead. To reduce computation overhead, we used smaller structures for the GAN (to train the generator via FL) and the client-side discriminator (for weighting after generator training), and reduced the number of local epochs and the number of communication rounds for training the generator in a data-free manner. 
Specifically, for the GAN, we utilized a simplified DCGAN structure based on [this implementation](https://github.com/Ksuryateja/DCGAN-MNIST-pytorch/blob/master/gan_mnist.py), modifying the number of channels to 3. For the client-side discriminator, we adopted the CNN+MLP architecture described in our earlier response. Furthermore, unlike the submitted paper where the GAN was trained in the pre-FL stage with 30 local epochs and 100 communication rounds, we prepared the GAN in this experiment using only 5 local epochs and 5 communication rounds. Then, we compared the performance of FedGO after 50 communication rounds on CIFAR-10 with $\\\\alpha = 0.1$ for 100 clients, to other data-free FL algorithms FedAVG, FedProx, FedGKD, SCAFFOLD [1], and FedDisco [2]. The last two baselines have been newly added to ensure a comprehensive comparison.\\n\\n\\\\\\\\begin{array}{|l|c|c|c|c|c|c|}\\n\\\\\\\\hline\\n& & & & && \\\\\\\\text{FedGO} \\\\\\\\\\\\\\\\\\n & \\\\\\\\text{FedAVG} & \\\\\\\\text{FedProx} & \\\\\\\\text{Scaffold} & \\\\\\\\text{FedGKD} & \\\\\\\\text{FedDisco} & \\\\\\\\text{(G3)+(D3)}\\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\text{Server Test Accuracy} & 33.96 \\\\\\\\pm 4.20 & 36.80 \\\\\\\\pm 3.96 & 37.94 \\\\\\\\pm 2.73 & 37.2 \\\\\\\\pm 3.21 & 36.53 \\\\\\\\pm 2.96 & \\\\\\\\textbf{40.45} \\\\\\\\pm 4.77 \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Client-side MFLOPs} & 1.667 \\\\\\\\text{e+10} & 1.667 \\\\\\\\text{e+10} & 1.667 \\\\\\\\text{e+10} & 3.336 \\\\\\\\text{e+10} & 1.667 \\\\\\\\text{e+10} & 1.671 \\\\\\\\text{e+10} \\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\end{array}\\n\\nAs shown in the table above, **FedGO achieves superior performance compared to baseline algorithms while maintaining minimal additional client-side computation.** Specifically, the MFLOPs required to prepare the generator and discriminator are only **4.682e+7**, which is less than **3%** of the MFLOPs required for classifier training over 100 communication rounds 
(**1.666e+10**).\\n\\nThis efficiency is achieved through the use of a significantly smaller GAN structure, demonstrating that **even with minimal computational and communication requirements, our approach delivers notable performance improvements**. We hope this addresses your concerns and highlights the practical viability of FedGO in the data-free (G3)+(D3) scenario.\\n\\n[1] Karimireddy, Sai Praneeth, et al. \\\"Scaffold: Stochastic controlled averaging for federated learning.\\\"\\u00a0*International conference on machine learning*. PMLR, 2020.\\n\\n[2] Ye, Rui, et al. \\\"Feddisco: Federated learning with discrepancy-aware collaboration.\\\"\\u00a0*International Conference on Machine Learning*. PMLR, 2023.\"}", "{\"summary\": \"The paper introduces a new approach to address the issue of data heterogeneity in federated learning. By applying Generative Adversarial Network (GAN) techniques to federated ensemble distillation, the paper proposes a near-optimal weighting method that enhances the training process of the server model. Extensive experimental validation demonstrates significant improvements in model performance and convergence speed across various image classification tasks. Moreover, the study provides an in-depth analysis of the potential additional communication costs, privacy leaks, and computational burdens introduced by this method, showcasing its practicality and flexibility in protecting data privacy and enhancing system efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper demonstrates originality through its innovative integration of GAN-based techniques with federated ensemble distillation. 
The use of discriminators trained at the client side to optimize the weighting of client contributions during the distillation process is a novel approach that has not been extensively explored in previous federated learning research.\\n\\nThe method's originality is further enhanced by its theoretical grounding, which employs results from GAN literature to develop a provably near-optimal weighting method. \\n\\nThe experimental setup is well thought out.\", \"weaknesses\": \"The paper claims near-optimal performance based on theoretical justifications rooted in GAN literature. However, these claims might depend heavily on certain idealized assumptions about data distributions and discriminator performance. Real-world deviations from these assumptions could lead to suboptimal performance. The paper does not explain how to select discriminator architectures.\", \"questions\": \"1. In the introduction on page 2, under \\\"Our main contributions are summarized in the following\\\": it should be \\\"Federated Ensemble Distillation\\\" instead of \\\"Ferated Ensemble Distillation\\\".\\n\\n2. In the theoretical analysis, near-optimal performance is heavily dependent on discriminator performance. I do not understand how to select the discriminator architectures. Can you give a more detailed description?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"4. **Experimental settings**\\n\\nWe agree that it is important to clearly state the experimental settings of the baselines. As detailed in Appendix E.2 of the initially submitted paper, we have thoroughly documented the experimental settings of our FedGO and baseline methods. For all the baseline experiments, the random seeds, data splits, and model structures were the same as FedGO, and some hyperparameters specific to each baseline algorithm were optimized using grid search to select the best-performing values. 
If there are additional details that you believe should be reported, please let us know, and we will be happy to include them.\\n\\n5. **Additional Overhead due to Utilizing GAN**\\n\\nWe acknowledge the reviewer\\u2019s concern that the additional computational and communication overhead introduced by the GAN-based approach could present challenges in FL scenarios, particularly under strict resource constraints. However, given the nature of FL, the server often communicates with numerous clients, aggregates client models, and trains the server model. As such, many studies assume\\u2014and often find in practice\\u2014that the server typically has significantly more computational and communication resources than clients. Building on this assumption, several federated ensemble distillation studies focus on imposing additional computation on the server, rather than on the clients, to achieve faster convergence for the same communication budget.\\n\\nOur FedGO algorithm, which is GAN-based, implements a **provably near-optimal weighting method** with **minimal additional client-side computation and communication overhead**. To substantiate this, we provided extensive analyses of the communication, privacy, and computational costs in Section 3.2, Table 1, Appendix G, and Table 9 in the initially submitted paper. \\n\\nLet us first focus on the scenarios other than the data-free case (G3) + (D3). As shown in Table 9, FedGO imposes only around 2% additional computational cost on the client side compared to FedAVG/FedDF. In particular, in the (G2) + (D1) scenario, it incurs less than 1.5% additional server-side computational cost compared to FedDF. Regarding communication cost, the additional overhead introduced by FedGO is also negligible, compared to FedAVG/FedDF. The only additional communication required is a one-shot exchange of the generator and discriminator between the server and clients. 
In our experiments, the parameters of the ResNet-18 classifier were approximately 90MB when stored as a PyTorch `state_dict`. In comparison, the generator and discriminator models are 4.61MB and 2.53MB, respectively. Over 100 communication rounds, during which ResNet-18 is transmitted repeatedly, the additional communication introduced by FedGO is nearly negligible.\\n\\nFurthermore, the total computational cost in Table 9 assumes 20 clients. On the server side, the most computationally demanding case for utilizing FedGO on the server side occurs when training the generator in the (G1)+(D1) scenario, which accounts for approximately 63.5% of the total computational cost. However, once the generator is trained in the pre-FL stage, no further computation is needed. In real-world scenarios, where 100+ clients may participate in FL, the computation required for pseudo-labeling scales linearly with the number of clients. Consequently, as the number of clients increases, the relative proportion of the computational cost for training the generator decreases. Additionally, as shown in Figure 3, FedGO demonstrates significant performance advantages and faster convergence rates compared to baseline algorithms as the number of clients increases. This suggests that, in terms of computational and communication cost efficiency, **FedGO may be a more effective algorithm for achieving the same performance**.\\n\\nFor the data-free FedGO with (G3)+(D3), we recognize that such data-free approaches impose non-negligible communication and computational costs on both the client and server sides, potentially limiting their applicability in resource-constrained environments. However, as the computational and communication capabilities of devices continue to improve, many recent studies\\u2014like those referenced in our initially submitted paper\\u2014are actively exploring data-free FL approaches. 
In this context, our study aligns with the growing body of research pushing the boundaries of FL capabilities while addressing modern hardware advancements. We believe our work contributes meaningfully to this evolving field and offers a promising avenue for further exploration.\"}", "{\"comment\": \"We appreciate your detailed feedback and would like to clarify a potential misunderstanding regarding the data-free approach proposed in our work. In the data-free scenario (G3)+(D3), **FedGO does not require a pre-existing server dataset or pretrained generator before the pre-FL stage**. Instead, both **the generator and the distillation dataset are constructed during the pre-FL stage using a fully data-free methodology**. Additionally, while client-side resources are used to train discriminators, **the computational and memory overhead has been carefully evaluated to be minimal,** as detailed below.\\n\\n1. **Server Dataset and/or Pretrained Generator** \\n \\n As outlined in Section 3.2 of our paper, under the data-free scenario (G3) + (D3), the generator is trained using FL techniques, such as FedGAN. This approach **does not require any public dataset, unlabeled data available only on the server, or prior knowledge of client data**. The generator then produces synthetic data, which is used for ensemble distillation.\\n \\n To further clarify, when we state that a pretrained generator is not required in the data-free scenario, we specifically contrast this with scenarios like (G2) in Table 1 of Section 3.2. In (G2), a pretrained generator (e.g., StyleGAN trained on large, public datasets) is necessary, and this generator must be available prior to the pre-FL stage. In contrast, **our data-free approach (G3) avoids this requirement entirely by training the generator dynamically within the FL framework**, thereby eliminating the dependency on large external datasets or pretrained models.\\n \\n2. **Clarification of Figure 4**\\n \\n Fig. 
4 illustrates that the generator is prepared according to one of the three methods, (G1, G2, or G3). Here, **(G3) represents the previously mentioned data-free approach**, which does not require any public dataset, unlabeled data available only on the server, or prior knowledge of client data.\\n \\n3. **Client-Side Resources**\\n \\n We acknowledge your concern regarding the additional computational overhead introduced by the pre-training of client discriminators, especially in resource-constrained FL scenarios.\\n \\n Our FedGO algorithm incorporates a provably near-optimal weighting method, which minimizes additional client-side computational overhead. To support this, we provided extensive analyses of the communication, privacy, and computational costs in Section 3.2, Table 1, Appendix G, and Table 9 of the submitted paper.\\n \\n Specifically, for scenarios other than the data-free case (G3) + (D3), Table 9 shows that FedGO incurs only approximately 2% additional client-side computational cost compared to FedAvg/FedDF. In the (G2) + (D1) scenario, it imposes less than 1.5% additional server-side computational cost compared to FedDF. These results demonstrate that FedGO operates efficiently while maintaining strong performance.\\n \\n Additionally, during this rebuttal phase, we conducted further experiments on the performance of the FedGO algorithm using different discriminator architectures. We found that even with a client discriminator structure that has less than 1/4 the number of parameters and forward FLOPs of the discriminator used in the initially submitted paper, FedGO still achieves nearly identical performance. This result will be included in the revised paper. 
These results indicate that FedGO may require even less additional client computation and memory overhead than the results reported in Table 9 (which were already very small), while maintaining the same server model performance.\\n \\n\\nWe hope this explanation resolves your concerns about the necessity of server datasets or pretrained generators in the data-free scenario. The flexibility of FedGO to operate effectively in both data-free and auxiliary-data settings ensures its adaptability to various FL environments, including those with limited resources or stringent privacy requirements.\\n\\nThank you for your continued engagement. We are happy to address any further questions or concerns you may have.\"}", "{\"comment\": \"Thank you for reviewing our paper and recognizing the value of our results. We also appreciate you pointing out the typo in the introduction; we have corrected it in the revised paper.\\n\\nWe greatly appreciate your question about the selection of discriminator architectures. To address it, we have conducted experiments with the following three different client discriminator architectures:\\n\\n- **CNN**: The baseline architecture used in the submitted paper. 
It consists of four convolutional layers.\\n- **CNN+MLP**: A variation of the CNN architecture, where the last two convolutional layers in the CNN are replaced by a single multi-layer perceptron (MLP) layer, resulting in a three-layer shallow network.\\n- **ResNet**: A deeper architecture based on ResNet-8, an 8-layer residual network.\\n\\nThe table below summarizes the number of parameters, the number of forward computation FLOPs, and the server model's test accuracy on CIFAR-10 with $\\alpha = 0.1$ at the 100-th communication round when using these three different discriminator architectures.\\n\\n\\\\begin{array}{|l|ccc|}\\n\\\\hline\\n&&\\\\text{FedGO}&\\\\\\\\\\n\\\\text{Discriminator Structure} & \\\\text{CNN} & \\\\text{CNN+MLP} & \\\\text{ResNet} \\\\\\\\\\n\\\\hline\\n\\\\text{Number of Parameters} & 662,528 & 142,336 & 1,230,528\\\\\\\\\\n\\\\text{MFLOPs} & 17.6 & 9.18 & 51.1 \\\\\\\\\\n\\\\text{Server Test Accuracy} & 79.62 \\\\pm 4.36 & 79.71 \\\\pm 4.71 & 78.73 \\\\pm 5.03 \\\\\\\\\\n\\\\hline\\n\\\\end{array}\\n\\nAs seen in the table above, all discriminator architectures achieve nearly identical server model performance. These results demonstrate that the performance of the FedGO algorithm is robust to different discriminator architectures and maintains strong performance regardless of the chosen structure. \\n\\nTherefore, we recommend the CNN+MLP discriminator, as it significantly reduces client-side computation and memory overhead while delivering competitive results. This flexibility enables FedGO to effectively adapt to diverse FL scenarios with varying resource constraints.\\n\\nWe hope this response clarifies your concerns and highlights the adaptability of our approach. Thank you again for your constructive feedback. 
We look forward to addressing any additional questions you might have.\"}", "{\"summary\": \"This paper proposed a novel federated ensemble distillation approach that utilizes generative adversarial networks (GANs) to address the challenges posed by data diversity across clients. Specifically, the proposed approach employs GANs to optimize the weighting of client predictions, thereby improving the quality of pseudo-labels generated during the ensemble distillation process. The paper provides theoretical insights that establish the effectiveness of the proposed method. Comprehensive experiments demonstrate that the proposed approach outperforms existing methods in robustness against data heterogeneity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a theoretical foundation for the proposed approach, which validates its effectiveness and enhances its credibility.\\n2. The paper analyzes communication, privacy, and computational complexity within different scenarios, providing valuable insights for implementing the proposed approach.\", \"weaknesses\": \"1. This paper needs to demonstrate the effectiveness of the proposed approach on different model structures, such as VGG and MobileNet.\\n2. The effectiveness of the proposed method relies on the quality of the discriminator and generator. The paper needs to conduct related ablation studies.\\n3. This paper should conduct ablation studies to analyze the impact of hyperparameters (e.g. $E_s$ and $E_d$) on the effectiveness of the approach.\\n4. The experimental settings of the baselines are not clearly stated, and it is important to clarify the fairness of the experimental comparison.\\n5. 
The additional computational and communication overhead introduced by the GAN-based approach may not be suitable for FL scenarios, particularly those with strict resource constraints.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"First, we would like to express our gratitude for the time and effort you put into reviewing our paper. We agree with the feedback that the use of an unlabeled dataset on the server (no need to share with clients for FedGO) or pretrained generator may not always be feasible. In fact, our paper presents a solution for such situations: we developed a method for our FedGO algorithm to operate in a data-free setting without the need for an additional dataset or pretrained generator, and we included both the method and experimental results in the paper.\\n\\n\\nAs outlined in Section 3.2, we propose a data-free approach: when the server does not have a dataset (S2), the generator is trained using FL techniques like FedGAN (G3), and a distillation dataset is created with that generator (D3). The experimental results for this approach are provided in Appendix F.3. Furthermore, we offer a comprehensive comparison and analysis in Appendix G, addressing communication, privacy, and computation aspects for scenarios involving an additional dataset, pretrained generator, or a data-free approach.\\n\\n\\nTo summarize, **our paper already proposes a data-free approach that does not require an external dataset or pretrained generator, and it includes experimental results as well as a multi-faceted analysis covering privacy and computation aspects**. We hope that our comprehensive analysis of various scenarios will have a positive impact on your evaluation. 
Thank you.\"}", "{\"comment\": \"Thank you for acknowledging that our weighting method for federated ensemble distillation is a novel approach with theoretically guaranteed optimality. However, concerns regarding additional costs due to the use of GAN have been raised. As we have explained in detail to the first and third reviewers, we would like to reaffirm that these additional costs are not a significant concern.\\n\\nAs detailed in Appendix G and our most recent response to the first reviewer (**Gtch)**, **FedGO incurs only negligible additional computational and communication costs on the client side, even in a fully data-free setup**. Despite this minor overhead, FedGO leverages a theoretically guaranteed, provably near-optimal weighting approach, enabling it to achieve state-of-the-art performance in federated ensemble distillation. Furthermore, it achieves comparable performance with fewer communication rounds, reducing client-side computation, communication, and privacy overheads through faster convergence. \\n\\nWe respectfully assert that concerns about additional costs should not detract from the substantial contributions our work makes to the field of federated ensemble distillation.\"}", "{\"comment\": \"We have uploaded a revised version of the paper, incorporating additional experiments and analyses conducted during the rebuttal period. All changes from the initially submitted paper are highlighted in blue. A summary of major changes is provided below:\\n\\n1. **Additional Experiments with Alternative Architectures**\\n \\n **Classifier Architectures:** We conducted the main experiments using two different classifier architectures. The results demonstrate that FedGO consistently outperforms other baseline algorithms regardless of the model architecture. Details can be found in Appendix F.3.\\n \\n **Discriminator Architecture:** We assessed the impact of discriminator architectures on FedGO's performance. 
The results indicate that FedGO achieves similar final performance regardless of the discriminator architecture. These findings are summarized in Appendix F.6.\\n \\n2. **Analysis of Impact of Hyperparameters**\\n \\n **Server Model Training Epochs:** We evaluated the performance of FedGO with varying numbers of training epochs for the server model. The experiments show that FedGO achieves higher performance than baseline algorithms even with fewer server model training epochs. This analysis is detailed in Appendix F.5.\\n \\n **Learning Rate Decay:** We investigated the effect of learning rate decay during server model training. The results reveal that FedGO performs better without learning rate decay. This is also discussed in Appendix F.5.\\n \\n **Generator Training Epochs:** We examined the effect of varying the training epochs of the generator on FedGO's performance. Interestingly, even when using an untrained, randomly initialized generator, FedGO outperforms or matches the performance of baseline algorithms. Further details are available in Appendix F.6.\\n \\n3. **Additional Explanation on the Overhead of FedGO**\\n \\n We have provided a more detailed explanation emphasizing that the additional communication and computation overhead of our FedGO is minimal compared to previous federated ensemble distillation methods.\\n \\n\\nThese updates aim to further clarify and strengthen the findings of our work.\"}", "{\"comment\": \"2. **Quality of Discriminator and Generator**\\n\\nWe agree with the reviewer on the importance of analyzing the impact of the quality of the discriminator and generator. Note that we already reported the experimental results according to the discriminator quality in Appendix F.5 and Table 7 of the initially submitted paper. 
Thus, we focused on additional experiments to evaluate the impact of the generator quality.\\n\\nKeeping all other settings unchanged from our main setup, we measured the performance of our FedGO with varying generator training steps (originally 100,000) alongside baseline algorithms after 50 communication rounds. The results are summarized in the table below:\\n\\\\\\\\begin{array}{|l|c|c|ccccc|} \\\\\\\\hline & \\\\\\\\text{FedDF} & \\\\\\\\text{DaFKD} &&& \\\\\\\\textbf{FedGO (ours)}& \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Generator Training Steps} & - & \\\\\\\\text{100,000 Steps} & \\\\\\\\text{0 Steps} & \\\\\\\\text{25,000 Steps} & \\\\\\\\text{50,000 Steps} & \\\\\\\\text{75,000 Steps} & \\\\\\\\text{100,000 Steps} \\\\\\\\\\\\\\\\ \\\\\\\\hline \\\\\\\\text{Server Test Accuracy} & 70.18 \\\\\\\\pm 2.56 & 71.42 \\\\\\\\pm 3.11 & 71.12 \\\\\\\\pm 2.07 & 76.74 \\\\\\\\pm 3.16 & 78.43 \\\\\\\\pm 0.99 & 78.89 \\\\\\\\pm 1.55 & 78.24 \\\\\\\\pm 1.61 \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Ensemble Test Accuracy} & 73.55 \\\\\\\\pm 2.41 & 74.54 \\\\\\\\pm 2.80 & 74.88 \\\\\\\\pm 1.63 & 79.12 \\\\\\\\pm 1.97 & 80.72 \\\\\\\\pm 0.75 & 80.87 \\\\\\\\pm 0.98 & 80.82 \\\\\\\\pm 0.82 \\\\\\\\\\\\\\\\ \\\\\\\\hline \\\\\\\\end{array}\\nAs shown in the above table, FedGO with the generator trained for 25,000 steps performs better than that with the randomly initialized generator (0 steps), with little performance improvement beyond 25,000 steps. Remarkably, even a randomly initialized generator outperforms FedDF with uniform weighting and achieves performance comparable to DaFKD with a generator trained for 100,000 steps.\\n\\n3. **Impact of Hyperparameters**\\n\\nThanks for the comment. For the discriminator training epochs $E_d$, we already reported relevant experimental results in Appendix F.5 and Table 7 of the initially submitted paper. 
To address the reviewer\\u2019s suggestion, we additionally evaluated the impact of server epochs ($E_s$, set to 10 in the initially submitted paper) on FedGO\\u2019s performance after 100 communication rounds.\\n\\nAs shown in the above table, using 5 epochs outperforms 1 epoch, with minimal performance differences beyond 5 epochs. Notably, even with only 1 epoch, FedGO significantly outperforms all the baselines trained with 10 server epochs in the initially submitted paper (Table 2 of the paper).\\n\\\\\\\\begin{array}{|l|c|c|c|c|}\\n\\\\\\\\hline\\n & \\\\\\\\text{1 Epoch} & \\\\\\\\text{5 Epochs} & \\\\\\\\text{10 Epochs} & \\\\\\\\text{20 Epochs} \\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\text{Server Test Accuracy (\\\\\\\\%)} & 74.03 \\\\\\\\pm 6.41 & 79.06 \\\\\\\\pm 5.30 & 79.62 \\\\\\\\pm 4.36 & 78.32 \\\\\\\\pm 5.13 \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Ensemble Test Accuracy (\\\\\\\\%)} & 77.16 \\\\\\\\pm 0.88 & 80.97 \\\\\\\\pm 0.87 & 81.56 \\\\\\\\pm 0.48 & 81.39 \\\\\\\\pm 0.75 \\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\end{array}\\nMoreover, we conducted an additional experiment to evaluate the impact of the server model\\u2019s learning rate decay on FedGO\\u2019s performance after 100 communication rounds. In the initially submitted paper, we used cosine learning rate decay by following the experimental setting of FedDF.\\n\\\\\\\\begin{array}{|l|ccc|}\\n\\\\\\\\hline\\n&& \\\\\\\\text{FedGO}&\\\\\\\\\\\\\\\\\\n& \\\\\\\\text{with LR decay}&&\\\\\\\\text{without LR decay}\\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\text{Server Test Accuracy} & 79.62 \\\\\\\\pm 4.36 && 80.18 \\\\\\\\pm 2.16 \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Ensemble Test Accuracy} & 81.56 \\\\\\\\pm 0.48 && 85.20 \\\\\\\\pm 1.33 \\\\\\\\\\\\\\\\\\n\\\\\\\\hline\\n\\\\\\\\end{array}\\nAs shown in the above table, the absence of learning rate decay resulted in further performance improvement. 
Specifically, an ensemble test accuracy of 85.20% was achieved, which is comparable to the central training model\\u2019s accuracy of 85.33%, demonstrating the effectiveness of our provably near-optimal weighting method.\"}", "{\"comment\": \"We are glad that some of your concerns were resolved through our additional experiments and explanations, and we appreciate the opportunity to address your remaining questions about computational costs and privacy risks.\\n\\nFederated ensemble distillation has emerged as **a powerful approach for addressing challenges like data heterogeneity, with numerous studies presented at major conferences reflecting its value and relevance in the research community**. For example, FedDF is an early study on federated ensemble distillation, presented at NeurIPS 2020. It has been **cited over 1,000 times to date, reflecting the academic community's significant interest in and recognition of this methodology**. One central aspect of this framework is the requirement for a distillation dataset, which can either pre-exist on the server or be constructed dynamically using data-free methods. Both directions have been actively explored. \\n\\nIn our work, we propose a method that operates within the same federated ensemble distillation scenario as prior approaches. As demonstrated in Appendix G, FedGO introduces **only minimal computational overhead and privacy leakage on the client side,** compared to other federated ensemble distillation methods such as FedDF. With this minimal additional overhead, **our method benefits from a theoretically guaranteed, provably near-optimal weighting scheme, thereby achieving state-of-the-art performance in federated ensemble distillation**. 
Furthermore, this approach enables us to reach the same level of performance with fewer communication rounds, making our algorithm **more efficient in terms of client-side computational, communication, and privacy costs**, thanks to faster convergence.\\n\\nTherefore, we believe that **concerns regarding computational cost or privacy risks should not lead to an undervaluation of our work.** To do so would risk dismissing the broader progress made in federated ensemble distillation, which has been actively and extensively studied in the research community. We hope this context provides clarity and addresses any remaining concerns.\"}", "{\"metareview\": \"Summary: The paper introduces FedGO, a federated learning approach that uses Generative Adversarial Networks (GANs) to optimize client prediction weighting in the ensemble distillation process, aiming to improve robustness against data heterogeneity. The approach provides theoretical insights and shows improved performance and convergence speed in image classification tasks. 
However, concerns remain about the additional computational overhead and privacy implications of FedGO.\", \"strengths\": \"FedGO offers a novel integration of GAN techniques with federated ensemble distillation, potentially enhancing the training process of server models.\\n\\nThe paper provides a theoretical foundation for the method and demonstrates its effectiveness through extensive experiments.\", \"drawbacks\": \"The approach may introduce significant additional computational and communication overhead due to the need for discriminator training and uploading, which could be prohibitive in resource-constrained federated learning scenarios.\\n\\nThere are concerns about the privacy implications of FedGO, as it requires the upload of locally trained discriminators, increasing the risk of privacy leakage.\\n\\nThe effectiveness of FedGO relies heavily on the quality of the discriminator and generator, and the paper lacks ablation studies to analyze the impact of hyperparameters and the robustness of the method under different conditions.\\n\\nGiven the above points, I must reject this work due to concerns about its practical applicability, the potential increase in privacy risks, and the need for more comprehensive analysis to address the drawbacks identified.\", \"additional_comments_on_reviewer_discussion\": \"Concerns are not well-addressed.\"}", "{\"comment\": \"Thank you very much for taking the time to read and review our paper. In accordance with the reviewer\\u2019s comments, we conducted several additional experiments on CIFAR-10 with \\ud835\\udefc=0.1. The experimental results were obtained using five different random seeds, and the reported results are presented as the mean \\u00b1 standard deviation.\\n\\n1. **Different Model Structures**\\n\\nIn accordance with the reviewer\\u2019s suggestion, we conducted additional experiments using different model structures, which are VGG11 (with BatchNorm Layer) and ResNet-50. 
For VGG11, both the client and server models were trained using SGD with a learning rate of 0.01 and momentum of 0.9, and all the other settings including hyperparameters were kept identical to those in the initially submitted paper. We implemented VGG11 based on https://github.com/chengyangfu/pytorch-vgg-cifar10. For ResNet-50, all the settings including optimizer and hyperparameters were the same as in the initially submitted paper. The table below presents the server test accuracy of central training, FedDF, and FedGO with the aforementioned model structures after 100 communication rounds.\\n\\\\begin{array}{|l|c|c|}\\n\\\\hline\\n& \\\\text{VGG11} & \\\\text{ResNet-50} \\\\\\\\\\n\\\\hline\\n\\\\text{Central training} & 83.27 \\\\pm 0.60 & 85.12 \\\\pm 0.44 \\\\\\\\\\n\\\\text{FedDF} & 68.59 \\\\pm 4.65 & 65.21 \\\\pm 4.62 \\\\\\\\\\n\\\\textbf{FedGO (ours)} & 72.53 \\\\pm 4.10 & 75.52 \\\\pm 4.30 \\\\\\\\\\n\\\\hline\\n\\\\end{array}\\nWe can see that our FedGO algorithm consistently achieves performance gains over FedDF across different model structures. We will update experimental results for an additional baseline algorithm, FedGKD$^+$, within this rebuttal period, and for all the other baseline algorithms in the final camera-ready version of the paper.\"}", "{\"summary\": \"This paper proposes FedGO: Federated Ensemble Distillation with GAN-based Optimality, for federated ensemble distillation. This algorithm incorporates a novel weighting method using the client discriminators that are trained at the clients based on the generator distributed from the server and their own datasets. The generator distributed from the server can be either off-the-shelf or trained with the unlabeled dataset on the server. 
The exchange of the generator and the client discriminators between the server and the clients occurs only once before the main FL algorithm starts, resulting in minimal additional overhead.\\nExtensive experiments demonstrate significant improvements of FedGO over existing research both in final performance and convergence speed on multiple image datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The authors conducted extensive experiments to verify the effectiveness of the proposed method.\", \"weaknesses\": \"As far as I am concerned, distillation-based FL is data-dependent and requires access to an auxiliary dataset derived from publicly available proxy data sources for knowledge transfer, whereas a desirable auxiliary dataset is not always available since its construction requires careful deliberation and even prior knowledge about clients\\u2019 private data to achieve satisfactory performance, which is inconsistent with the privacy-preserving nature of FL. In addition, I argue that FedGO with Pretrained Generator proposed in this paper also has the above-mentioned issues. This is because the pretrained generator needs to be trained on public datasets. Therefore, I remain skeptical of this research direction, even if the paper contains theoretical evidence. Furthermore, if the authors want to convince me, please provide some feasible solutions to address the aforementioned issues.\\nI'll raise my score if the authors can address the above problems.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4fJghLR3hk
Addressing Extrapolation Error in Multi-Agent Reinforcement Learning
[ "Yueheng Li", "Guangming Xie" ]
Cooperative Multi-Agent Reinforcement Learning (MARL) has become a critical tool for addressing complex real-world problems. However, scalability remains a significant challenge due to the exponentially growing joint action space. In our analysis, we highlight a critical but often overlooked issue: **extrapolation error**, which arises when unseen state-action pairs are inaccurately assigned unrealistic values, severely affecting performance. We demonstrate that the success of value factorization methods can be largely attributed to their ability to mitigate this error. Building on this insight, we introduce multi-step bootstrapping and ensemble techniques to further reduce extrapolation errors, showing that straightforward modifications can lead to substantial performance improvements. Our findings underscore the importance of recognizing extrapolation error in MARL and highlight the potential of exploring simpler methods to advance the field.
[ "cooperative multi-agent reinforcement learning", "CTDE", "value factorization", "extrapolation error" ]
Reject
https://openreview.net/pdf?id=4fJghLR3hk
https://openreview.net/forum?id=4fJghLR3hk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tegHIZFokm", "o628SjTjfF", "hxE0uXmqFV", "fNRIKgrQRn", "ZD6lmrxfGt", "WmNWb9tHU8", "SY30kVvjz4", "R48D0V0KT2", "Q8YI9biRjK", "LazaKZc8wn", "Io2Ti9Q2lB", "HmOBnT9CPM", "94O0YrR2Yq" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment" ], "note_created": [ 1732459165428, 1732089336057, 1730375314321, 1732091274165, 1732088418257, 1732089247934, 1737523714444, 1732555348350, 1732090100422, 1730646067385, 1734136072357, 1730083360841, 1732545007248 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_nWE9" ], [ "ICLR.cc/2025/Conference/Submission5578/Authors" ], [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_JVoU" ], [ "ICLR.cc/2025/Conference/Submission5578/Authors" ], [ "ICLR.cc/2025/Conference/Submission5578/Authors" ], [ "ICLR.cc/2025/Conference/Submission5578/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_h49b" ], [ "ICLR.cc/2025/Conference/Submission5578/Authors" ], [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_nWE9" ], [ "ICLR.cc/2025/Conference/Submission5578/Area_Chair_sGFo" ], [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_h49b" ], [ "ICLR.cc/2025/Conference/Submission5578/Reviewer_JVoU" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed and thoughtful response. I appreciate the effort you put into addressing my concerns and clarifying the contributions of your work. However, after considering your explanation, I still believe that the contributions are not sufficient to warrant acceptance. I'd like to maintain my score.\"}", "{\"comment\": \"> **8. There are no implementation details or parameter searches provided for the baseline methods. 
Only searching parameters for the proposed method is unfair and may lead to biased results.**\\n\\nThe baselines are implemented using fine-tuned versions from PyMARL2 (VDN, QMIX, QPLEX) and FACMAC\\u2019s paper (FACMAC, MADDPG). We did not perform parameter searches for any methods, including ours, as $\\\\lambda^*$ and $M$ were set heuristically.\\n\\n> **9. Section 3 provides a detailed analysis of QPLEX to illustrate extrapolation error in MARL. However, Section 4 switches to QMIX. Is there a specific reason for this switch?**\\n\\nWe chose QPLEX for theoretical analysis due to its explicit modeling of joint action spaces. QMIX was used in Section 4 because of its simplicity and popularity. However, the findings in Section 4 are applicable to other methods, including QPLEX.\\n\\n[1] Fujimoto, Scott, David Meger, and Doina Precup. \\\"Off-policy deep reinforcement learning without exploration.\\\" In International conference on machine learning, pp. 2052-2062. PMLR, 2019.\\n\\n[2] Anschel, Oron, Nir Baram, and Nahum Shimkin. \\\"Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning.\\\" In International conference on machine learning, pp. 176-185. PMLR, 2017.\\n\\n[3] Rashid, Tabish, Mikayel Samvelyan, Christian Schroeder De Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. \\\"Monotonic value function factorisation for deep multi-agent reinforcement learning.\\\" Journal of Machine Learning Research 21, no. 178 (2020): 1-51.\\n\\n[4] Kozuno, Tadashi, Yunhao Tang, Mark Rowland, R\\u00e9mi Munos, Steven Kapturowski, Will Dabney, Michal Valko, and David Abel. \\\"Revisiting Peng\\u2019s Q ($\\\\lambda$) for Modern Reinforcement Learning.\\\" In International Conference on Machine Learning, pp. 5794-5804. PMLR, 2021.\\n\\n[5] Hu, Jian, et al. 
\\\"Rethinking the Implementation Tricks and Monotonicity Constraint in Cooperative Multi-agent Reinforcement Learning.\\\" The Second Blogpost Track at ICLR 2023.\\n\\n[6] Peng, Bei, et al. \\\"Facmac: Factored multi-agent centralised policy gradients.\\\" Advances in Neural Information Processing Systems 34 (2021): 12208-12221.\"}", "{\"summary\": \"The authors discuss and provide an analysis on the extrapolation error in Multi-Agent Reinforcement Learning (MARL), and show that value factorisation methods, like QMIX, can help reduce this error. Furthermore, they propose two methods to reduce extrapolation error in MARL, specifically multi-step bootstrapping and using ensembled independent value functions. The authors show that these methods can improve the performance of QMIX, in SMAC, SMACv2 and Google Research Football (GRF) environments, and of on-policy MARL algorithms like MADDPG and FACMAC on SMAC.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written, clear and easy to follow.\", \"Extrapolation error, especially in online MARL, is a relatively unexplored area. This paper appears to be among the first to address this issue, providing both an analysis and methods to mitigate it.\", \"The paper provides a relevant discussion on the propagation of extrapolation error in MARL and how value factorization methods can help reduce it. Building on this analysis, the authors introduce targeted modifications to reduce the bias and variance associated with extrapolation error, with results showing consistent performance improvements across different environments and algorithms. Additionally, ablation studies on ensemble size and the $\\\\lambda$ annealing parameter are included.\"], \"weaknesses\": [\"The experiment section is not very detailed. The authors should provide more information on the experimental setup, including how many seeds were run, the evaluation procedure and the evaluation interval. 
Furthermore, the main results presented in Table 1 don't include the standard deviation, which is important to understand the significance of the results.\", \"Although the authors provide results in three environments, two of them are SMAC and SMACv2, which might share some similarities. It might be more informative to use a different environment to SMACv1.\", \"It is unclear if parameter sharing is used in the baseline algorithms. If it is, then the proposed ensemble method would result in many more learnable parameters. This could be a potential source of the improvement in the results, especially since when using smaller ensembles ($M=1,2$) in Figure 5b, performance is worse than the QMIX baseline. It would be important to disentangle the effect of increased capacity and extrapolation error mitigation.\"], \"questions\": \"1. Was parameter sharing used in the baselines?\\n2. What is the comparison of the parameter counts across the baseline and proposed modifications? How does this scale as the number of agents increase?\\n3. Could the authors specify the experiment details as discussed in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their thoughtful feedback. We greatly appreciate the recognition of our paper\\u2019s insightful theoretical analysis, the practicality of our proposed methods, and the contribution to mitigating extrapolation errors in MARL. These strengths highlight the significance of our work in providing a deeper understanding of stable value estimation and its role in improving MARL performance and stability.\\n\\n> **1. Fundamental Limitations of Value Factorization.**\\n\\nWe acknowledge the fundamental limitations of value factorization in its reliance on a factorized action space. 
However, addressing this issue comprehensively has remained a challenge for years and is **not the primary focus of our paper**. Despite its potential suboptimality, value factorization consistently demonstrates superior performance compared to other baselines. Our objective is to investigate **why it performs well and identify ways to further exploit this advantage**.\", \"our_findings_provide_insights_that\": [\"**Support Value Factorization in addressing joint action dependencies**: We show that ignoring extrapolation errors during attempts to capture the full joint action space can result in significant error accumulation. For example, our modified QPLEX* improves upon QPLEX by considering joint action spaces while accounting for extrapolation errors. This enhances stability without compromising performance.\", \"**Generalize beyond Value Factorization**: The insights derived from our study can be directly applied to methods like MADDPG, which do not suffer from suboptimality issues, or inspire improvements in these methods.\", \"> **2. Incremental and Limited MARL-Specific Solutions.**\", \"We emphasize that the primary goal of our paper is to build a _theoretical foundation_ for understanding and mitigating extrapolation errors in MARL, rather than introducing purely novel methods. To this end, we deliberately use **well-established techniques** with **clear theoretical properties** to validate our insights. 
We believe that _addressing a critical, overlooked issue using simple and effective approaches represents a meaningful contribution to the field_.\", \"While we acknowledge that our methods do not explicitly consider agent interactions, we do offer **MARL-specific considerations**:\", \"We introduced QPLEX*, which bounds QPLEX's $\\\\lambda(s,a)$ to avoid extrapolation error accumulation and improves the stability.\", \"We find that starting with a large $\\\\lambda$ in PQL, while being unusual in single-agent settings due to suboptimality, actually benefits MARL due to the extrapolation error.\", \"In line 406-408, we opt not to ensemble the mixing network since it is unrelated to extrapolation, as verified in Figure 11.\", \"In line 408-412, we avoid averaging before the mixing network to implicitly regularize it, as verified in Figure 15.\", \"> **3. Addressing Suboptimality and Agent Dependencies**\", \"As noted above, our method does not directly address the suboptimality inherent in value factorization caused by its inability to fully capture joint action interactions. However, our contributions can:\", \"**Facilitate future developments in value factorization methods**: Previous attempts to address suboptimality, such as QTRAN and QPLEX, struggled with extrapolation errors, which we mitigate through our approach. Our insights can guide the development of methods that aim to overcome suboptimality while retaining stability.\", \"**Benefit MARL methods without suboptimality issues**: Since our study addresses extrapolation errors\\u2014a common challenge across MARL methods\\u2014it can directly apply to or inspire enhancements in approaches that do not rely on value factorization.\", \"We hope these responses clarify our contributions and demonstrate how our work can drive progress in MARL research.\"]}", "{\"comment\": \"Thanks for your review. 
We answer this question first, which helps with the subsequent rebuttal.\n> **How can we ensure that extrapolation error is reduced?**\", \"we_approach_this_from_both_theoretical_and_empirical_perspectives\": \"**Theoretical Perspective**: Extrapolation error arises when the target Q-function relies on values from rarely seen actions. By reducing the usage of such values, we can mitigate extrapolation error. For example: Factorized Q-functions operate in a much smaller action space compared to joint Q-functions. With the same sample size, factorized Q-functions are less likely to use extrapolated Q-values, reducing the extrapolation error. This theoretical property highlights why value factorization methods effectively mitigate extrapolation error.\n\n**Empirical Perspective**:\nWhile extrapolation error **cannot be directly measured** due to its integration within the total neural network error, we use Target Estimation Error (TEE), as introduced in our paper, as an indirect indicator.\nFor example, since value factorization theoretically reduces extrapolation error, its impact on TEE provides indirect evidence.\nAs shown in Figure 1(b)(c), TEE is significantly reduced. Although value factorization introduces limitations in function approximation, which should increase TEE, the observed reduction in TEE confirms that the theoretical and empirical results mutually validate each other.\n\n> **1. Extrapolation error is a commonly discussed topic in single-agent RL and naturally extends to MARL.**\n\nWe\u2019d like to emphasize that it is not straightforward to consider extrapolation error in **online MARL**. Unlike offline single/multi-agent RL, where extrapolation error has been studied extensively, it is rarely considered in online single-agent RL and has not been addressed in online MARL prior to our work. The unique challenge in MARL is its large joint action space, which amplifies extrapolation error. 
This is discussed in Section 3.1 of our paper.\\n\\n> **2. The authors do not provide new insights or discuss challenges specific to MARL.**\", \"we_respectfully_argue_that_our_paper_introduces_several_new_insights_and_all_specific_to_marl\": [\"Online MARL suffers from exacerbated extrapolation error due to large joint action spaces.\", \"Value factorization methods effectively mitigate extrapolation error by addressing joint action space challenges.\", \"Monotonic factorization is crucial to prevent the accumulation of extrapolation error.\", \"Performance in existing MARL methods is heavily influenced by extrapolation error.\", \"None of these points can be directly derived from prior works [1,2,3,4], demonstrating that our findings are novel and specific to MARL.\", \"> **3. The paper claims that extrapolation error is a major issue in MARL, but the authors do not provide any evidence to support this claim.**\", \"Here, we summarize the evidence provided based on the 4 points mentioned above, respectively.\", \"In Figure 1(a), we conduct experiments on SMAC and find 20%-60% of the target estimation relies on extrapolated values, directly highlighting the importance of extrapolation error in MARL.\", \"As discussed at the beginning, value factorization reduces extrapolation error by limiting the use of extrapolated Q-values. This theoretical property is empirically supported in Figures 1(b)(c), where TEE decreases significantly.\", \"Section 3.2 provides theoretical analysis showing that monotonic factorization is critical to prevent extrapolation error accumulation. Although experiments specific to this are not included, prior works [5,6] on non-monotonic factorization support our claim.\", \"In Section 3.3, we show how extrapolation error destabilizes QPLEX. Furthermore, Section 5 and the appendix demonstrate that methods such as VDN, QMIX, QPLEX, FACMAC, and MADDPG show substantial performance improvement when extrapolation error is addressed.\", \"> **4. 
Line 352 states, \"The behavior policy typically originates from the old policy stored in the replay buffer, which may not align closely with the current policy after convergence\". Does this not hold even after convergence? Why?**\", \"Sorry for this mistake. We meant \u201cbefore convergence\u201d rather than \\\"after convergence.\\\"\"]}", "{\"comment\": \">**5. Line 294 states, \\\"While the mean and standard deviation of $\\lambda$ remain small, the maximum value of $\\lambda$ grows significantly as training progresses, eventually leading to performance degradation.\\\"**\n\nAs shown in Figure 2, $\\lambda_\\max$ significantly increases from 0 to 15 during the first 5M steps while $\\lambda_{mean}$ and $\\lambda_{std}$ remain below 0.5. This trend indicates that large $\\lambda$ values only appear for a small subset of joint actions. This behavior aligns with extrapolation error: errors accumulate on these rarely updated joint actions, ultimately causing instability in QPLEX\u2019s training at around 5M steps.\n\nTo confirm that the observed growth of $\\lambda_\\max$ is problematic, we introduced QPLEX*, which directly bounds $\\lambda$ to the range [0,1]. QPLEX* achieves the same performance without the instability observed in standard QPLEX. This result validates that the unbounded growth of $\\lambda$ is the root cause of the performance degradation.\nThe subsequent decline of $\\lambda_\\max$ after 5M steps is irrelevant, as the training has already crashed at that point.\n\n>**6. Too many results are placed in the appendix but are referenced in the main text, especially since some claims are based on the appendix (e.g., lines 355, 407, 411, and 430).**\n\nWe acknowledge that referencing results in the appendix can hinder readability. 
For the next version, we will:\\n- Relocate key results from the appendix (e.g., those referenced in lines 355 and 430) to the main text.\\n- Clarify that implementation details (e.g., those referenced in lines 407 and 411), while important, are secondary to the main results and thus remain in the appendix.\\n\\n> **7. We address the following concerns collectively:**\\n>- The two proposed techniques are for bias/variance reduction, which do not seem to be directly related to extrapolation error. \\n>- There is no evidence that the proposed method mitigates extrapolation error, thus leading to better performance. \\n>- Since the paper discusses extrapolation error in MARL, can you provide results demonstrating that your method mitigates extrapolation error compared to the baselines?\\n>- Since the Target Estimation Error (TEE) can be influenced by issues such as overestimation and extrapolation errors, how can you ensure that the issue is indeed extrapolation error due to unseen state-action values backpropagating rather than overestimation due to the max operator [1]?\\n\\n**Do the proposed methods reduce extrapolation error?**\\nBoth techniques\\u2014PQL and ensemble methods\\u2014directly address extrapolation error:\\n- PQL: Reduces extrapolation error by assigning lower weights to target Q-values that may rely on extrapolated values.\\n- Ensemble: By lowering the variance of the target Q-function, ensemble methods reduce extrapolation error as part of the total error reduction.\\n\\nThis leads us to investigate whether the performance improvements stem from **extrapolation error reduction** or **other errors** addressed by the proposed methods. Since extrapolation error cannot be directly measured, we rely on TEE, which mainly captures both extrapolation and overestimation errors. Figure 4 shows that TEE is significantly reduced with our methods. 
To validate that this reduction and the performance improvements stem from extrapolation error:\\n- **Ruling out Overestimation**: Appendix E3 (Figure 12) shows that introducing a more conservative target (to reduce overestimation) negatively impacts performance. This demonstrates that overestimation is not a major factor for the tested methods, confirming that the observed performance gain arises from mitigating extrapolation error.\\n- **Testing Mixing Networks**: Ensemble methods applied to the mixing network reduce general error, but they do not improve performance (Figure 11). This is because extrapolation error arises from the action space, which the mixing network does not influence.\\n- **Structural Bias vs. Extrapolation Error**: While PQL may also reduce structural bias in value factorization methods, its effectiveness is not limited to such cases. Both QPLEX and MADDPG\\u2014methods without structural bias\\u2014benefit significantly from our techniques. This supports the conclusion that the observed performance gains primarily result from reduced extrapolation error.\\n\\nWe used complementary approaches to isolate the impact of extrapolation error on performance. The results, supported by both **theoretical analysis** and **empirical validation**, demonstrate that our methods effectively reduce extrapolation error and significantly enhance performance. We believe that there are no other significant issues that may affect performance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your response. However, my concerns remain unaddressed. I will keep my score unchanged.\\n\\n1. Novelty. Although the authors argue that their paper introduces several new insights, I still find these insights obvious and drawn from the literature I mentioned.\\n\\n2. Evidence of extrapolation error. It is still not convincing that extrapolation error is the main reason for the performance gap. 
The error definition in eq.(3) is the same as the one in [1], except for the name change from \\\"overestimation error\\\" to \\\"TEE\\\". I do not see evidence in the analysis that extrapolation error is the main reason for the performance gap. \\n\\n3. Experiments. The authors mentioned that they did not tune the hyperparameters for baselines and the proposed method. This is not a good practice, and the sensitivity of the proposed method to hyperparameters is unclear. I recommend tuning the hyperparameters for all methods and reporting the results in the next version, at least for common hyperparameters like learning rate and the specific hyperparameters for the proposed method.\\n\\n[1] Anschel, Oron, Nir Baram, and Nahum Shimkin. \\\"Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning.\\\" In International conference on machine learning, pp. 176-185. PMLR, 2017.\"}", "{\"comment\": \"We sincerely thank the reviewer for their detailed and constructive feedback. We are also grateful for the recognition of the strengths of our work, including the clarity of the paper, our novel focus on extrapolation error in online MARL, and the thorough analysis and methods we introduced to mitigate this issue.\\n\\n> **1. The experiment section is not very detailed.**\\n\\nWe appreciate the reviewer\\u2019s concern about the level of detail in the experimental section. The number of seeds is detailed in Figures 8, 9, and 10 in the appendix, where SMAC and SMACv2 use 3 seeds, and GRF uses 5 seeds. The evaluation procedure follows the standard setup for SMAC: training pauses every $10^4$ steps, and the agent is evaluated for 32 episodes using greedy action selection.\\n\\nRegarding the omission of standard deviations in Table 1, this decision was made to save space. 
However, the learning curves in Figures 8, 9, and 10 in the appendix provide a detailed view of performance, including error bars, which we believe addresses this concern.\\n\\n> **2. It might be more informative to use a different environment to SMACv1.**\\n\\nThank you for the suggestion. We are currently conducting experiments on several tasks from PettingZoo to expand the diversity of environments. Unfortunately, due to the time required for tuning and running baselines, we are unable to provide these results during the rebuttal phase. We appreciate your understanding.\\n\\n> **3. It is unclear if parameter sharing is used in the baseline algorithms.**\\n\\nWe confirm that all baseline algorithms (except for MADDPG) utilize parameter sharing across agents in our experiments.\\n\\n> **4. Why using smaller ensembles (M=1,2) in Figure 5b, performance is worse than the QMIX baseline.**\\n\\nThis is due to the annealing of $\\\\lambda$ occurring too quickly, before convergence is achieved, and not because of the ensemble itself.\\nAs discussed in Lines 510\\u2013515, the annealing approach is designed for quicker convergence. However, as shown in Figure 5(c), smaller $\\\\lambda$ leads to poor performance on the 3s5z_vs_3s6z task due to insufficient convergence. Premature annealing of $\\\\lambda$ negatively impacts performance in these cases. In contrast, larger ensembles achieve convergence earlier, making $\\\\lambda$-annealing more effective.\\nAdditionally, Figure 5(a) demonstrates that smaller ensembles (M=2) still significantly improve performance compared to the baseline.\\n\\n> **5. It would be important to disentangle the effect of increased capacity and extrapolation error mitigation.**\\n\\nThank you for this valuable suggestion. To address this concern, we conducted experiments with larger QMIX models on SMACv2 by increasing the hidden dimensions of the individual Q-function from 64 to 128 and 256. 
The results (Figure 16) indicate that simply increasing model capacity does not improve performance, confirming that the observed improvements are not due to increased capacity.\\n\\n> **6. What is the comparison of the parameter counts across the baseline and proposed modifications? How does this scale as the number of agents increase?**\\n\\nThe primary increase in parameter count arises from the ensemble modification.\\n\\n- In QMIX, the total parameter count is _individual_Q_parameters + mixer_parameters_. With an ensemble of size M, this scales to _M*individual_Q_parameters + mixer_parameters_.\\n- For policy-based methods like FACMAC, the baseline parameters include _individual_Q_parameters + individual_policy_parameters + mixer_parameters_, which scale to _M\\u00d7individual_Q_parameters + individual_policy_parameters + mixer_parameters_ with an ensemble of size M. \\n\\nThe parameter count depends on the size of each component and the state-action space of the task. For example, in the _corridor_ task of SMAC, QMIX\\u2019s individual Q-parameters total 39k, while the mixer parameters are 69k. Importantly, the parameter count does not scale with the number of agents due to parameter sharing, except in methods like QPLEX and MADDPG, which take joint actions as input.\"}", "{\"summary\": \"This paper addresses the challenge of extrapolation errors in multi-agent reinforcement learning (MARL), focusing on the issue caused by the large joint action space. To mitigate these issues, the authors propose the application of modified multi-step bootstrapping and ensemble TD target techniques, aiming to enhance learning stability and reduce prediction variance. These proposed solutions are supported by theoretical analysis that explains the propagation of extrapolation errors and the importance of ensuring consistent value estimation. 
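The scaling rule stated above can be made concrete with a small helper (the function name is ours; the 39k/69k figures are the corridor-task counts quoted in the answer):

```python
# Parameter count for a value-based method with an ensemble of M
# individual Q-networks, following the rule stated above:
#   M * individual_Q_parameters + mixer_parameters.
# With parameter sharing, the count does not grow with the number of
# agents (QPLEX/MADDPG, which consume joint actions, are exceptions).
def qmix_ensemble_params(m, individual_q=39_000, mixer=69_000):
    return m * individual_q + mixer

print(qmix_ensemble_params(1))  # 108000: single-network baseline
print(qmix_ensemble_params(5))  # 264000: ensemble of M = 5
```

Note that only the individual-Q term is multiplied by M; the mixer is shared across the ensemble, so the overhead stays linear in M and independent of the team size.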
Empirical results validate these approaches, demonstrating that they contribute to improved performance and more stable training in various MARL scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Insightful Theoretical Analysis: The theoretical framework helps illustrate the propagation of extrapolation errors and lays a foundation for understanding the importance of stable value estimation in MARL.\", \"The proposed methods, including multi-step bootstrapping and ensemble TD targets, are backed by experiments showing improved performance and stability over baseline approaches in MARL settings, demonstrating their utility in practice.\", \"The paper highlights the extrapolation errors in MARL and proposes practical solutions to mitigate this challenge, contributing to a better understanding and partial resolution of this important problem.\"], \"weaknesses\": [\"Fundamental Limitations of Value Factorization: Although the paper claims that the success of value factorization methods is largely due to their ability to mitigate extrapolation errors (as noted in the abstract), this mitigation is not comprehensive. The approach simplifies the estimation by focusing on local utilities, but it may fail to capture the full complexity of joint action spaces. An agent\\u2019s action value can vary significantly when combined with other agents\\u2019 actions, leading to potential suboptimal solutions. While this method improves learning stability, as further discussed in Sections 3.1 and 3.2, it does not fully address the diverse combinations and dependencies between agents, which are critical for optimal policy learning in MARL.\", \"Incremental and Limited MARL-Specific Solutions: The proposed methods, while addressing the large joint action space, primarily adapt existing techniques like multi-step bootstrapping and ensemble TD targets. 
These approaches lack innovation and do not sufficiently consider agent interactions, a key aspect of MARL. This results in simplified solutions that may fall short in effectively handling complex, cooperative scenarios, limiting their overall impact and applicability.\"], \"questions\": \"While the paper notes that value factorization mitigates extrapolation errors, how does the method address the potential suboptimality caused by not fully capturing the complexity of joint action interactions among agents? Are there plans to extend the method to better account for agent dependencies and interaction effects?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper is about reducing extrapolation error in team-reward cooperative MARL setting as captured by SMAC and similar environments. Extrapolation error is defined as arising from states not encountered during training receiving unrealistic values after learning. Reviewers felt that the work was incremental, and not novel enough, especially that multi-agent aspects of the setting (interactions between agents) were not really addressed in a way that pushed things forward beyond what was already done in the large literature which now exists on cooperative MARL.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers responded to say that they were not convinced by the authors' replies to them.\\n\\nThe authors sent me a message to complain about the reviewers not changing their scores, but I don't find it convincing. The reviewers' comments look much more plausible to me than the strange things the authors claim in their message: that this paper with MARL in the title really wasn't meant to be about MARL? I doubt it.\"}", "{\"summary\": \"This paper discusses extrapolation error in multi-agent reinforcement learning (MARL). 
The authors show that extrapolation error is a critical issue in MARL, affecting performance due to propagation from unseen state-action pairs, especially when the action space is large, as is often the case in MARL. Instead of proposing a new algorithm, the authors introduce two existing techniques, annealed multi-step bootstrapping and ensembled TD targets, to mitigate extrapolation error. The proposed method is tested across three domains: SMAC, GRF and SMACv2. The results show that the two simple modifications to existing methods can lead to significant performance improvements.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Extrapolation error in MARL is a natural extension of the single-agent case to the multi-agent case, which is reasonable.\", \"The improved method is tested across three domains on numerous maps, which is commendable.\"], \"weaknesses\": [\"Lack of novelty. Although the paper does not introduce new techniques or methods, I would not consider this a lack of novelty. The lack of novelty in this paper lies in its discussion of extrapolation error, which does not offer anything new. Extrapolation error is a commonly discussed topic in single-agent RL and naturally extends to MARL, which is acceptable. However, the authors do not provide new insights or discussions about challenges specific to MARL. Most of the content is similar to the single-agent case. It feels more like stitching together existing works [1,2,3,4] rather than proposing a new perspective or addressing a new issue induced by the multi-agent setting.\", \"The writing needs improvement. The core idea is simple and natural, but the logic is messy. There are many statements that are too subjective without any evidence, making the paper less convincing. 
For example, line 352 states, \\\"The behavior policy typically originates from the old policy stored in the replay buffer, which may not align closely with the current policy after convergence\\\". Does this not hold even after convergence? Why? Additionally, some conclusions are not consistent with the results. For example, line 294 states, \\\"While the mean and standard deviation of $\\\\lambda$ remain small, the maximum value of $\\\\lambda$ grows significantly as training progresses, eventually leading to performance degradation.\\\" However, Fig 2 shows that $\\\\lambda_{max}$ decreases over time, and the performance increases when $\\\\lambda_{max}$ increases.\", \"Too many results are placed in the appendix but are referenced in the main text, especially since some claims are based on the appendix (e.g., lines 355, 407, 411, and 430). This affects the readability of the paper.\", \"The paper claims that extrapolation error is a major issue in MARL, but the authors do not provide any evidence to support this claim. The two proposed techniques are for bias/variance reduction, which do not seem to be directly related to extrapolation error. There is no evidence that the proposed method mitigates extrapolation error, thus leading to better performance.\", \"There are no implementation details or parameter searches provided for the baseline methods. Only searching parameters for the proposed method is unfair and may lead to biased results.\", \"Minor issue. Some learning curves are missing in the last column of Appendix Figure 10.\", \"[1] Fujimoto, Scott, David Meger, and Doina Precup. \\\"Off-policy deep reinforcement learning without exploration.\\\" In International conference on machine learning, pp. 2052-2062. PMLR, 2019.\", \"[2] Anschel, Oron, Nir Baram, and Nahum Shimkin. \\\"Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning.\\\" In International conference on machine learning, pp. 176-185. 
PMLR, 2017.\", \"[3] Rashid, Tabish, Mikayel Samvelyan, Christian Schroeder De Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. \\\"Monotonic value function factorisation for deep multi-agent reinforcement learning.\\\" Journal of Machine Learning Research 21, no. 178 (2020): 1-51.\", \"[4] Kozuno, Tadashi, Yunhao Tang, Mark Rowland, R\\u00e9mi Munos, Steven Kapturowski, Will Dabney, Michal Valko, and David Abel. \\\"Revisiting Peng\\u2019s Q ($\\\\lambda$) for Modern Reinforcement Learning.\\\" In International Conference on Machine Learning, pp. 5794-5804. PMLR, 2021.\"], \"questions\": [\"Since the paper discusses extrapolation error in MARL, can you provide results demonstrating that your method mitigates extrapolation error compared to the baselines?\", \"Since the Target Estimation Error (TEE) can be influenced by issues such as overestimation and extrapolation errors, how can you ensure that the issue is indeed extrapolation error due to unseen state-action values backpropagating rather than overestimation due to the max operator [1]?\", \"Section 3 provides a detailed analysis of QPLEX to illustrate extrapolation error in MARL. However, Section 4 switches to QMIX. Is there a specific reason for this switch?\", \"See Weaknesses.\", \"[1] Anschel, Oron, Nir Baram, and Nahum Shimkin. \\\"Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning.\\\" In International conference on machine learning, pp. 176-185. PMLR, 2017.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their responses and clarifications. While the rebuttal addresses some concerns and strengthens the paper, I will maintain my initial score and still recommend a weak accept.\"}" ] }
4f4HDfbwY5
CPDD: Generalized Compressed Representation for Multivariate Long-term Time Series Generation
[ "Weijian Li", "Shijie Li", "Huaiguang Jiang" ]
The generation of time series has increasingly wide applications in many fields, such as electricity and energy. Generating realistic multivariate long time series is a crucial step towards making time series generative models practical, with the challenge being the balance between long-term dependencies and short-term feature learning. Towards this end, we propose a novel time series generative model named Compressed Patch Denoising Diffusion-model (CPDD). Concretely, CPDD first employs the Time-series Patch Compressed (TPC) module based on the patch mode decomposition method to obtain the latent encoding of multi-scale feature fusion. Subsequently, it utilizes a diffusion-based model to learn the latent distribution and decode the resulting samples, thereby achieving high-quality multivariate long-time series generation. Through extensive experiments, results show that CPDD achieves state-of-the-art performance in the generation task of multivariate long-time series. Furthermore, TPC also exhibits remarkable efficiency in terms of robustness and generalization in time series reconstruction.
[ "Generative Model", "Deep Learning", "Mode Function", "Diffusion Model", "Long-term Time Series" ]
Reject
https://openreview.net/pdf?id=4f4HDfbwY5
https://openreview.net/forum?id=4f4HDfbwY5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zBJwQMXneh", "z60rfunKon", "vMlH0iF0St", "tGLMVhyVHG", "s8izqbgsQz", "nzYRGdoF1i", "i8klUM9EVe", "e5py7oKeLJ", "XKLdkFluQp", "XIkoVPMrLq", "WYLTFjP3xh", "UaXWNspCsl", "Rk5PYA4tIE", "OanYjTWjDi", "MGaLGp84ef", "K6BXCGS3TN", "GL0LApS1A5", "FgGME4t3KD", "FALnVeSN0f", "DWmwWbqq2h", "CqRMr0wMka", "B9WRVEMsHm", "8nCbKmm11J", "7qXxlkRcUe", "1cSvMdOhHW", "0cpp6lXBDk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732260367258, 1732265761997, 1732262542624, 1732697038861, 1729734176411, 1732258906767, 1732268021499, 1732262518007, 1732447346193, 1732263688571, 1732267515216, 1730224815904, 1734755661728, 1730192517089, 1732717520140, 1732602099109, 1732268066650, 1732265811131, 1732258956886, 1732261706024, 1732686198597, 1737524075529, 1732266657144, 1732675450504, 1730715931499, 1732713724851 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_fwsP" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_tQBV" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10767/Reviewer_F7YS" ], [ "ICLR.cc/2025/Conference/Submission10767/Area_Chair_qGEg" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_afTP" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_tQBV" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_afTP" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_fwsP" ], [ "ICLR.cc/2025/Conference/Submission10767/Reviewer_tQBV" ], [ "ICLR.cc/2025/Conference/Submission10767/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer F7YS\", \"comment\": \"We are grateful to the reviewer for their detailed and insightful comments, which have provided us with an opportunity to enhance the presentation and technical soundness of our paper. We greatly value the reviewers\\u2019 thoughtful comments, which have guided us in improving our work. To address these concerns, we provide detailed responses to each point below.\\n\\n>**W 1**: The evaluation experiments presented in the paper are insufficient to convincingly demonstrate the effectiveness of the proposed method. Specifically, more common used evaluation metrics need to be added (like MSE, MAE, .etc), and the selection of baseline methods (both the diffusion-based methods and the transformer-based methods should be compared).\\n\\nWe thank the reviewer's insightful suggestion. In the first point, we address the evaluation experiments question, and in the second point, we discuss the baseline methods.\\n1. 
In generation tasks, there are usually multiple valid outputs, but MAE and MSE calculations are based on point-to-point errors and do not account for the diversity or semantic similarity between the generated results and the reference targets. We have added the Context-FID[1] and Correlational[2] evaluation metrics in the table below, which are more suitable for evaluating generative tasks as they account for the semantic consistency and distributional similarity of the generated outputs. Context-FID evaluates the overall quality of the generated time series by extracting features using a pre-trained embedding model and measuring the distributional differences between generated and real sequences in the embedding space. Correlational assesses whether the generated data preserves the statistical structure and dependency patterns of the time series by comparing the auto-correlation and cross-correlation distributions with those of the real data.\\n\\n**Context-FID Scores\\u2193**:\\n| Model | Sines | Electricity | ETTh1 | Energy |\\n|---------------|----------------|-----------------|----------------|-----------------:|\\n| CPDD | **5.687\\u00b1.252** | 5.996\\u00b1.294 | **3.238\\u00b1.421** | **1.151\\u00b11.072** |\\n| Diffusion-TS | 13.451\\u00b1.492 | 65.204\\u00b11.853 | 20.568\\u00b12.973 | 67.630\\u00b17.732 |\\n| Timegan | 59.031\\u00b12.223 | 18.365\\u00b1.613 | 10.381\\u00b11.227 | 61.022\\u00b11.893 |\\n| Timevae | 106.981\\u00b1.722 | **4.804\\u00b1.436** | 3.362\\u00b1.432 | 19.862\\u00b1.038 |\\n\\n**Correlational Scores\\u2193**:\\n| Model | Sines | Electricity | ETTh1 | Energy |\\n|---------------|----------------|-----------------|----------------|-----------------:|\\n| CPDD | 0.329\\u00b1.005 | 0.198\\u00b1.001 | 0.247\\u00b1.003 | **1.912\\u00b1.001** |\\n| Diffusion-TS | **0.194\\u00b1.003** | 0.213\\u00b1.001 | 0.430\\u00b1.008 | 7.271\\u00b1.007 |\\n| Timegan | 1.392\\u00b1.003 | 0.726\\u00b1.002 | 1.811\\u00b1.001 | 
15.581\\u00b1.003 |\\n| Timevae | 4.263\\u00b1.001 | **0.086\\u00b1.001** | **0.155\\u00b1.004** | 2.484\\u00b1.004 |\\n\\nOverall, CPDD demonstrates excellent generation performance, excelling in both Context-FID and Correlational metrics, while also maintaining a good balance between Context consistency and multivariate time series correlation.\\n\\n2. In the experiment of the paper, we have compared Diffusion-TS, which is based on the Diffusion model and Transformer block. We acknowledge the importance of comparing more Diffusion-based and Transformer-based methods. Nevertheless, we are still in the process of conducting additional baseline tests due to time limitations. We will expedite the completion of all baseline tests and promptly provide updates in the comments section or appendix of the paper.\"}", "{\"title\": \"Response to Reviewer afTP\", \"comment\": \">**W 4**: The entire CPDD process is quite confusing. please provide a specific implementation process or corresponding pseudocode?\\n\\nWe sincerely apologize for not prominently providing an overall training process. The pseudo-code for the training process of the two stages of CPDD is presented as:\\n\\n| **Algorithm: Two-Stage Training Framework for Time-series Patch Compressed (TPC) Module and Diffusion Module** |\\n|--------------------------------------------------------------------|\\n\\n### **Stage I: TPC Pretraining**\\n1. **Encode input**: \\n $ Z \\\\leftarrow TPCEncoder(X) $\\n2. **Decode reconstruction**: \\n $ \\\\hat{X} \\\\leftarrow TPCDecoder(Z) $\\n3. **Compute reconstruction loss**: \\n $ \\n \\\\text{Loss} = || X - \\\\hat{X} ||^2\\n $\\n4. **Optimize TPCEncoder and TPCDecoder parameters using gradient descent.**\\n\\n---------------------------------------------------\\n\\n### **Stage II: Diffusion Training**\\n1. **Initialization**: Freeze TPCEncoder after pretraining: \\n $ Z_0 \\\\leftarrow TPCEncoder_{\\\\text{freeze}}(X) $\\n3. 
**Generate noisy latent representation**: \\n \\n $Z_t = \\\\sqrt{\\\\bar{\\\\alpha}_t} Z_0 + \\\\sqrt{1 - \\\\bar{\\\\alpha}_t} \\\\varepsilon,$\\n\\nwhere $\\\\bar{\\\\alpha}_t = \\\\alpha_1 \\\\alpha_2 \\\\dots \\\\alpha_t, \\\\quad \\\\alpha_t = 1 - \\\\beta_t $. \\n\\n4. **Predict original latent representation**: \\n $ \\\\hat{Z}_0 \\\\leftarrow Diffusion(Z_t, t) $. \\n5. **Compute total loss**: \\n \\n $\\\\text{Loss} = \\\\lambda_1 \\\\alpha_t \\\\frac{(1 - \\\\bar{\\\\alpha}_t)}{\\\\beta_t^2} || Z_0 - \\\\hat{Z}_0 ||^2 $\\n $+ \\\\lambda_2 \\\\sum_{m=1}^M w(m) | \\\\hat{r}(m) - r(m) | + \\\\lambda_3 || X - \\\\hat{X} ||^2.$\\n \\n6. **Optimize Diffusion model parameters using gradient descent.**\\n\\n>**W 5**: In lines 340-341 of the article, the author mentions \\\"Z: latent representation obtained from the TPC Encoder during training.\\\" Then why is there no loss term for the TPC Encoder/Decoder in Equation 16? Is CPDD an end-to-end or a two-stage process? Please provide a detailed explanation.\\n\\nWe apologize for not explicitly listing the reconstruction loss function used for TPC training due to an organizational error in writing. We have added the loss function of TPC ($\\\\mathcal{L}_{recon}=||X-\\\\hat{X}||^2$) in the paper. As explained in the answer to Weakness 4, CPDD is a two-stage process. Initially, we trained the TPC as the latent encoder; we then used the frozen TPC encoder to obtain the latent encodings for training the diffusion model. Subsequently, the latent code produced by the diffusion model is decoded by the TPC decoder, completing the generation of the final multivariate long time series.\\n\\n>**W 6**: What is \\u201cL_{AFC}\\\" in Equation 16? Is there any difference between \\\"L_{AFC}\\\" and \\\"L_{ACF}\\\"?\\n\\nThanks to the reviewer for the careful review. \\\"AFC\\\" is a writing error; \\\"ACF\\\" is the correct notation. 
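As a minimal, runnable sketch of the Stage II forward-noising step defined above (the linear beta schedule, latent shape, and timestep are illustrative placeholders, not the paper's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(z0, t, betas, rng):
    # alpha_t = 1 - beta_t ; alpha_bar_t = alpha_1 * ... * alpha_t
    alpha_bar_t = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(z0.shape)
    # Z_t = sqrt(alpha_bar_t) * Z_0 + sqrt(1 - alpha_bar_t) * eps
    z_t = np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return z_t, eps

betas = np.linspace(1e-4, 0.02, 1000)  # illustrative DDPM-style schedule
z0 = rng.standard_normal((8, 16))      # toy latent Z_0 from a frozen encoder
z_t, eps = forward_noise(z0, t=500, betas=betas, rng=rng)
```

Given eps, the clean latent Z_0 is exactly recoverable from Z_t by inverting the formula above; that recovered quantity is what the Diffusion module is trained to predict in step 4.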
$\\\\mathcal{L}_{ACF}$ represents the autocorrelation loss, which is used to guide the model to learn the autocorrelation structure of the input time series.\"}", "{\"title\": \"Reference\", \"comment\": \">**Reference**:\\n\\n[1] Jeha Paul, Bohlke-Schneider Michael, Mercado Pedro, Kapoor Shubham, Singh Nirwan Rajbir, Flunkert Valentin, Gasthaus Jan, and Januschowski Tim. Psa-gan: Progressive self attention gans for synthetic time series, 2022.\\n\\n[2] Hao Ni, Lukasz Szpruch, Magnus Wiese, Shujian Liao, and Baoren Xiao. Conditional sig wasserstein gans for time series generation. arXiv preprint arXiv:2006.05421, 2020.\\n\\n[3]K. Dragomiretskiy and D. Zosso, \\\"Variational Mode Decomposition,\\\" in IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 531-544, Feb.1, 2014, doi: 10.1109/TSP.2013.2288675. \\n\\n[4]Huang N E, Shen Z, Long S R, et al. \\\"The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis.\\\" Proceedings of the Royal Society of London. Series A: mathematical, physical and engineering sciences 454.1971 (1998): 903-995.\\n\\n[5]Ranak Roy Chowdhury, Xiyuan Zhang, Jingbo Shang, Rajesh K Gupta, and Dezhi Hong. Tarnet: Task-aware reconstruction for time-series transformer. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 212\\u2013220, 2022.\\n\\n[6]George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, and Carsten Eick hoff. A transformer-based framework for multivariate time series representation learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD \\u201921, pp. 2114\\u20132124, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325.\\n\\n[7] Xu Ma, Xiyang Dai, Yue Bai, Yizhou Wang, and Yun Fu. Rewrite the stars. 
In Proceedings of the\\nIEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.\"}", "{\"title\": \"Response to Reviewer fwsP\", \"comment\": \">In consideration of baselines, you may not only compare with diffusion models, but also consider other strong deep learning baselines that also designed for time series forecasting.\\n\\nThank you to the reviewers for their time and feedback. We believe there may be a misunderstanding between **generative models** and **general-purpose generative models**\\uff08such as Timer\\uff0cTimeMixer\\uff09. Our work focuses on generative models specifically designed for **multivariate long time series generation**, which differ fundamentally from autoregressive models or general-purpose large models commonly used in **forecasting**. Therefore, a comparison with forecasting models is not applicable to our approach, and we maintain that the specialized domain of time series generation remains significant for further research. \\n\\n>Furthermore, it seems the proposed method lacks of scalability in high dimensional series.\\n\\nThank you for bringing up these concerns about scalability. We want to point out that the simultaneous handling of high channel dimensions (300-800) and long sequences (1024th) in time series generation is actually an unexplored frontier in current research. After thorough literature review, we haven't found any previous work that successfully tackles generation tasks of this scale and complexity. This highlights both the challenging nature of our undertaking and its potential significance for advancing the field. 
We would welcome any references to works that have achieved similar scalability in time series generation, as this would greatly benefit our research direction.\"}", "{\"summary\": \"This paper analyses the challenges faced in the time series generation task, including the limited ability of existing approaches to model long-term dependencies due to cumulative errors, the high computational complexity and time overhead due to the attention mechanism, and the inability to capture both long-term global dependencies and short-term local features. Inspired by the spatial distribution of latent variables modelled by LDM, in order to achieve a balance between long-term dependencies and short-term feature learning in time-series generation tasks, it proposes the Compressed Patch Denoising Diffusion-model (CPDD), where Time-series Patch Compressed (TPC) is designed based on the block pattern decomposition method to obtain multi-scale latent representations. The diffusion model then models the probability distribution of the latent representations, achieving high-quality multivariate long time series generation after decoding.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main contribution of this paper is to propose a technique that can efficiently compress and represent multivariate long-term time series data by decomposing the patches into pattern functions through which long-term dependencies and short-term features are consistently represented. Specifically, the TPC module learns generic combinations of pattern functions from patches to accommodate various patterns and enables a generic compressed representation of time series data.\", \"weaknesses\": \"This paper is dedicated to presenting a technique that can efficiently compress and model multivariate long-term time series data, which has important real-world implications. 
The main concerns are as follows:\\n\\n1.\\tAs a model \\u2018designed for multivariate long-term time series\\u2019, the main innovative structures proposed by CPDD, DSConv and TPC, do not have a structure or design aimed at establishing cross-channel connectivity. We believe that a key question is whether the proposed single-channel convolution can establish connectivity across a large number of channels, e.g., the ECL dataset of electricity scenarios in time series prediction contains 321 channels and the Traffic dataset contains 862 channels. Advances in multivariate prediction methods (iTransformer[1], SAMformer[2]) have shown that proper integration of channel management strategies in time series backbones is crucial for discovering univariate dynamics and cross-variate correlations.\\n\\n2.\\tThe lack of advanced baselines leads to the inability to validate the competitiveness of the proposed CPDD. Specifically, only three baselines based on Diffusion are shown in Table 1, and among them, TimeGAN and TimeVAE were published in 2021 and 2019, respectively. The introduction of a wider range of baselines to compare the performance of the proposed models is expected, to fully validate the effectiveness of the proposed methods. The referenced baselines can be divided into 4 parts: 1) Models based on pre-trained LLM alignment to TS, e.g. TimeLLM[3]; 2) Pre-trained foundation models on unified time series datasets from multiple domains, e.g. Timer[4], UniTime[5]; 3) Proprietary models trained and tested on specific datasets, e.g. PatchTST[6]; 4) Recent Diffusion-based temporal probabilistic prediction models, e.g. Diffusion-TS[7], mr-Diff[8]. CPDD is expected to be compared with at least one competitive model in each prediction paradigm to demonstrate the soundness of the model design. 
In addition, we would like to introduce more benchmarks, such as ECL and Traffic datasets with a large number of channel counts, which we believe will help to validate the promising real-world applications of the proposed models.\\n\\n3.\\tThe design of the ablation experiments in this paper is deficient. In addition to DSConv and TPC, CPDD uses other strategies such as Patch Embed and Trend-seasonal Decomposition, yet the ablation experiments presented in Table 2 do not include these structural designs. This raises our concern about the validity of DSConv and TPC.\\n\\n[1] Liu, Yong et al. \\u201ciTransformer: Inverted Transformers Are Effective for Time Series Forecasting.\\u201d ICLR 2024.\\n[2] Ilbert, Romain et al. \\u201cSAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention.\\u201d ICML 2024.\\n[3] Jin, Ming et al. \\u201cTime-LLM: Time Series Forecasting by Reprogramming Large Language Models.\\u201d ICLR 2024.\\n[4] Liu, Yong et al. \\u201cTimer: Generative Pre-trained Transformers Are Large Time Series Models.\\u201d ICML 2024.\\n[5] Liu, Xu et al. \\u201cUniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting.\\u201d Proceedings of the ACM on Web Conference 2024.\\n[6] Nie, Yuqi et al. \\u201cA Time Series is Worth 64 Words: Long-term Forecasting with Transformers.\\u201d ICLR 2023.\\n[7] Yuan, Xinyu and Yan Qiao. \\u201cDiffusion-TS: Interpretable Diffusion for General Time Series Generation.\\u201d ICLR 2024.\\n[8] Shen, Lifeng et al. 
\\u201cMulti-Resolution Diffusion Models for Time Series Forecasting.\\u201d ICLR 2024.\", \"questions\": \"1.\\tThe results of the baselines presented in Table 1 are inconsistent with the results presented in the original paper, e.g., the Discriminative Score of the Diffusion-TS model under the Sines dataset in Table 1 is 0.326, whereas it is reported as 0.006 in the original paper. In fact, the results of all baselines in Table 1 differ significantly. In addition, Table 1 only shows some of the metrics on the performance of time series generation, and the results of the proposed method on both Context-FID Score and Correlational Score are missing. In addition, traditional metrics for time series forecasting, such as MSE, MAE, CRPS, etc., are missing from Table 1, which results in the reader not getting a full picture of the potential limitations of CPDD.\\n\\n2.\\tIn Table 2, in the Predictive Score metric, the model with DSConv removed achieves better performance in the Sines dataset, and the model with TPC removed exhibits the best performance in the Energy dataset. The results of the ablation experiments are puzzling, which may shake the rationality of the structural design of DSConv and TPC.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer tQBV\", \"comment\": \"We sincerely thank the reviewer for their time, effort, and valuable feedback on our work. We are very grateful to the reviewer for their recognition of our work, which encouraged us to further improve the quality and clarity of the manuscript. The insightful comments and constructive suggestions have significantly helped us identify areas for improvement and further refine our manuscript.\\n\\nWe have carefully considered the reviewer's comments and made corresponding revisions to the manuscript. 
Below, we address each point raised by the reviewer in detail.\\n\\n>**W1**: Therefore, a detailed analysis of the computational complexity of the proposed CPDD is essential. \\n\\n>**Q3**: Can you provide a detailed comparison of the computational complexity between CPDD and baselines? For example, a table comparing their training and inference time.\\n\\nThanks for the reviewer's insightful suggestion. Although CPDD utilizes Transformer blocks, the dimensions of the input tokens change after the embedding layer. Here, the token sequence input length $L$ will decrease to 1/16 to 1/4 of its original size, while the feature dimension $D$ will increase to 2 to 4 times its initial scale. The computational complexity of the Transformer is $\\\\mathcal{O}(DL^2+LD^{2})$. When the sequence length dimension is much larger than the feature dimension, the computational complexity will be effectively reduced. The specific comparison of training time and inference time is shown in the following table:\\n\\n| Dataset | Model | Training time (min) | Sample time (min) |\\n|:-------------:|:-------------:|:-------------------:|:-----------------:|\\n| Sines | Diffusion-TS | 146 | 65 |\\n| | CPDD | **63** | **30** |\\n| Electricity | Diffusion-TS | 109 | 495 |\\n| | CPDD | **92** | **65** |\\n| Etth1 | Diffusion-TS | 181 | 111 |\\n| | CPDD | **74** | **42** |\\n| Energy | Diffusion-TS | 185 | 410 |\\n| | CPDD | **128** | **38** |\\n\\nThe shorter training and sampling times of CPDD compared to Diffusion-TS indicate that CPDD effectively reduces the computational complexity.\\n\\n>**W 2**: The visualizations in this paper are generally of high quality. Unifying the font size in Figure 4, especially on the left-side module, to match that of other figures would improve visual consistency and readability.\\n\\nWe thank the reviewer for highlighting this issue. 
We agree that unifying the font size in Figure 4, especially in the left-side module, will improve visual consistency and readability. We have modified Figure 4 and ensured all figures are consistent in style and presentation in the paper.\\n\\n>**Q 1**: CPDD divides the entire time series into N patches, which might pose a risk of disrupting critical temporal patterns at the boundaries of these patches. Could this potentially affect the model's ability to accurately capture and reproduce these dynamics?\\n\\nWe thank the reviewer for this comment. To address the potential boundary effect issue due to patch segmentation, we implement **two** crucial measures for effective mitigation: \\n\\n1. Inheriting the overlapping patch strategy from PatchTST[1]. Concretely, by configuring a suitable stride to create overlapping regions between neighboring patches, this approach significantly minimizes information fragmentation at the segmentation boundary while preserving the coherence of local temporal features. \\n\\n2. Additionally, within the model architecture, the ConvFFN2 submodule is uniquely crafted within our DSConv module. This submodule focuses on learning the temporal correlations among adjacent patches and reviving temporal dynamic features that could be compromised by patch segmentation. \\n\\nOverall, boundary effects and the preservation of temporal dynamics have been mitigated through overlapping patch segmentation and the ConvFFN2 submodule.\\n\\n>**Q 2**: How does the patch length N impact the model performance? Is there an optimal range of N?\\n\\nWe thank the reviewer for the comments; we acknowledge that the paper lacked a detailed description of this point. The selection of the patch length N correlates with the input time series length L, the channel dimension C, and the model's hidden dimension. Consequently, a longer patch length and a larger channel dimension imply a higher potential for diverse underlying mode functions, necessitating a larger hidden dimension. 
In the experiment, the hyperparameters for the Time-series Patch Compressed (TPC) module are defined as follows: for input lengths ranging from 64 to 1024 and channel dimensions between 1 and 50, the patch length N is set between 4 and 16, with token hidden dimensions varying from 128 to 256.\"}", "{\"title\": \"Response to Reviewer fwsP\", \"comment\": \">**Q 1 (Part B)**: In addition, traditional metrics for time series forecasting, such as MSE, MAE, CRPS, etc., are missing from Table 1, which results in the reader not getting a full picture of the potential limitations of CPDD.\\n\\nWe thank the reviewer for these suggestions. In generation tasks, multiple valid outputs are common. However, MAE and MSE calculations focus on point-to-point errors and do not consider the diversity or semantic similarity between the generated outcomes and the reference targets. We have added the Context-FID[2] and Correlational[3] evaluation metrics to the table below. Context-FID involves extracting features of generated and real sequences using a pre-trained embedding model and calculating the distribution difference in the embedding space to measure the overall quality of the generated data. The Correlational method compares the autocorrelation and cross-correlation distributions of generated data with real data to assess whether it preserves the statistical structure and dependency patterns of the time series. 
\\n\\n**Context-FID Scores\\u2193**:\\n| Model | Sines | Electricity | ETTh1 | Energy |\\n|---------------|----------------|-----------------|----------------|-----------------:|\\n| CPDD | **5.687\\u00b1.252** | 5.996\\u00b1.294 | **3.238\\u00b1.421** | **1.151\\u00b11.072** |\\n| Diffusion-TS | 13.451\\u00b1.492 | 65.204\\u00b11.853 | 20.568\\u00b12.973 | 67.630\\u00b17.732 |\\n| Timegan | 59.031\\u00b12.223 | 18.365\\u00b1.613 | 10.381\\u00b11.227 | 61.022\\u00b11.893 |\\n| Timevae | 106.981\\u00b1.722 | **4.804\\u00b1.436** | 3.362\\u00b1.432 | 19.862\\u00b1.038 |\\n\\n**Correlational Scores\\u2193**:\\n| Model | Sines | Electricity | ETTh1 | Energy |\\n|---------------|----------------|-----------------|----------------|-----------------:|\\n| CPDD | 0.329\\u00b1.005 | 0.198\\u00b1.001 | 0.247\\u00b1.003 | **1.912\\u00b1.001** |\\n| Diffusion-TS | **0.194\\u00b1.003** | 0.213\\u00b1.001 | 0.430\\u00b1.008 | 7.271\\u00b1.007 |\\n| Timegan | 1.392\\u00b1.003 | 0.726\\u00b1.002 | 1.811\\u00b1.001 | 15.581\\u00b1.003 |\\n| Timevae | 4.263\\u00b1.001 | **0.086\\u00b1.001** | **0.155\\u00b1.004** | 2.484\\u00b1.004 |\\n\\nOverall, CPDD demonstrates excellent generation performance, excelling in both Context-FID and Correlational metrics, while also maintaining a good balance between context consistency and multivariate time series correlation.\\n\\n>**Q 2**: In Table 2, in the Predictive Score metric, the model with DSConv removed achieves better performance in the Sines dataset, and the model with TPC removed exhibits the best performance in the Energy dataset. The results of the ablation experiments are puzzling, which may shake the rationality of the structural design of DSConv and TPC.\\n\\nWe thank the reviewer for this comment. We chose the combination structure with the best generalization for each dataset. 
Specifically, the role of DSConv is closely linked to the dataset's complexity, while the role of TPC is associated with the chosen compression ratio. \\n\\nThe Sines dataset and the Energy dataset are two significantly different datasets. In the Sines dataset, the frequency and phase of the sine wave function for each channel are obtained through random sampling. Its periodicity and relatively simple structure cannot fully leverage the capabilities of DSConv to effectively capture complex multivariate dependencies. The Energy dataset represents energy consumption inside buildings and indoor/outdoor temperature and light levels for each channel, exhibiting complex patterns and noise. \\n\\nMoreover, as the experiment utilized all 28 channel dimensions of Energy, we set the feature dimension for compression encoding to 32 to save memory. The compression ratio is as high as (1024\\u00d728)/(64\\u00d732)=14. For devices with sufficient memory, CPDD can utilize a lower compression ratio, typically around 2 to 4 times, to decrease the Predictive Score and achieve a superior generation effect.\\n\\nAdditionally, during the experiment, we observed that in long time series generation, as the channel dimension increases, the Discriminative Score metric tends to be higher. As shown in the table below, the TPC model trained on the Traffic dataset was used to test the reconstructed time series output against the original time series, with different channels randomly selected. \\n\\n| Dataset | Dim=2 | Dim=10 | Dim=30 | Dim=50 |\\n|----------|-------------|-------------|-------------|-------------|\\n| Traffic \\u2193 | 0.034\\u00b10.005 | 0.202\\u00b10.046 | 0.496\\u00b10.004 | 0.499\\u00b10.001 |\\n\\nThe TPC model trained on the Traffic dataset has a reconstruction MSE loss of 0.0869, with an original time series length of 1024, a channel dimension of 50, a compression encoding length of 64, and a feature dimension of 32. 
In the case of training data with long time series and high channel dimensions, the classification model may experience overfitting.\"}", "{\"title\": \"Response to Reviewer F7YS\", \"comment\": \">**W 4**: While the paper employs a transformer as the encoder within the diffusion model, it is essential to consider the associated computational costs when making comparisons with baseline methods.\\n\\nAlthough our model incorporates a Transformer, the dimensions of its input tokens are altered after passing through the embedding layer. The input token sequence length L is reduced to 1/16 to 1/4 of its original size, while the feature dimension $D$ is increased by a factor of 2 to 4. The computational complexity of the Transformer is $\\\\mathcal{O}(DL^2 + LD^2)$. When $L$ is significantly larger than $D$, the computational complexity of CPDD effectively decreases. We conducted comparative tests to supplement the model's training and inference efficiency, with the results presented in the table below. \\n\\n| Dataset | Model | Training time (min) | Sample time (min) |\\n|:-------------:|:-------------:|:-------------------:|:-----------------:|\\n| Sines | Diffusion-TS | 146 | 65 |\\n| | CPDD | **63** | **30** |\\n| Electricity | Diffusion-TS | 109 | 495 |\\n| | CPDD | **92** | **65** |\\n| Etth1 | Diffusion-TS | 181 | 111 |\\n| | CPDD | **74** | **42** |\\n| Energy | Diffusion-TS | 185 | 410 |\\n| | CPDD | **128** | **38** |\\n\\nThe shorter training and sampling times of CPDD compared to Diffusion-TS indicate that CPDD effectively reduces the computational complexity.\\n\\n>**Q 1**: How are the short-term and long-term modes decomposed from the patches?\\n\\nWe are grateful to the reviewers for enabling us to improve this point. \\n\\nThe spontaneous decomposition of time series into long-term and short-term patterns is an inherent characteristic of the model's structure. 
While the Time-series Patch compressed (TPC) module leverages the moving average decomposition technique, its primary impact lies in accelerating training convergence. \\n\\nThe design of DSConv emphasizes temporal feature learning within the patch, facilitating the identification of short-term patterns. While the ConvFFN2 component targets the boundary information of adjacent patches and single-channel convolution emphasizes the relative scale information of patches across a broader range, the narrow channel design creates an information bottleneck that hinders the excessive development of non-short-term patterns. \\n\\nDSConv introduces the StarNet[7] that explicitly creates a latent space. In this structure, the ConvFFN output acts as the base of this space, and the single-channel convolution output serves as the coefficient within this latent space. This setup constrains the patterns learned by DSConv to focus on the localized, short-term temporal features within the patches. The StarNet structure enforces a form of hierarchical decomposition, emphasizing localized dynamics while suppressing excessive cross-patch interference.\\n\\nIn contrast, Transformer excels in capturing long-distance dependencies and global patterns due to its superior global attention mechanism, giving it a notable edge in modeling long-term features. \\n\\nHence, the integration of DSConv and Transformer can establish structural regularization, facilitating the autonomous separation of modes across the long-term and the short-term.\\n\\n>**Q 2**: What is the rationale for employing the transformer as the encoder of the diffusion model instead of using the transformer directly?\\n\\nThanks to the reviewer for this insightful question regarding the rationale behind employing the transformer as the encoder in the diffusion model. In the context of time series generation tasks, training data is often scarce, and using Transformer directly as a generation model can lead to overfitting. 
Utilizing a Transformer encoder can enhance global feature extraction capabilities, facilitating the capture of long-term patterns in the time series.\"}", "{\"comment\": \"Thanks for your responses. I appreciate the clarifications and revisions provided. Most of my concerns have been addressed. However, there are still a few points that require further explanation:\\n\\n- What are the lengths of time series used in the updated comparisons of training time and inference time? Please specify the data settings for the empirical computational costs.\\n \\n- My concerns regarding the effect of patch length N on model performance have not been fully addressed, particularly the first part of my question 2. Specifically, is the model's performance consistent across various patch lengths within the possible range? Would it be possible that only certain finely tuned patch lengths yield the superior performance demonstrated in the experiments, while other patch lengths do not maintain this level of effectiveness? Diving into the settings in your response, does the model perform stably across the patch lengths ranging from 4 to 16?\\n \\n\\nFurthermore, the clarified settings (e.g., overlapping patches) and updated results (e.g., empirical computational costs) in the responses should be included in the final version of this paper, preferably within an Appendix.\"}", "{\"title\": \"Response to Reviewer afTP\", \"comment\": \"We deeply appreciate the reviewer's thorough evaluation and thoughtful feedback. The suggestions have been instrumental in improving the clarity, rigor, and overall quality of our work.\\n\\nWe have thoroughly reviewed the valuable feedback from the reviewers and have revised the manuscript accordingly. 
Detailed responses to each comment are provided below for clarity and transparency.\\n\\n>**W 1**: The problem that the article aims to address is confusing; what does \\\"balance between long-term dependencies and short-term feature learning\\\" mean?\\n\\nWe apologize for not clearly articulating the concept of \\\"balancing the learning of long-term dependencies and short-term features\\\". Generally, the long-term dependencies in time series data manifest as trend changes and cyclical fluctuations, while short-term features reflect local variations or instantaneous mutations. Common methods in deep learning to learn long-term dependencies and short-term features involve multiscale feature fusion and multilevel structures. The paper employs a similar multilevel structure approach, initially compressing through TPC into a combination of long-term, short-term, and residual modal functions for representation encoding, and then utilizing a diffusion model to learn the distribution of representation encoding in latent space. \\n\\n>**W 2**: In line 25 of the article, the author mentions \\\"efficiency.\\\" How is this demonstrated in the article? Please compare the model's memory usage and inference time.\\n\\nWe apologize for not clearly articulating this advantage and providing the corresponding analysis on \\\"efficiency.\\\" This aspect primarily pertains to models using attention or Transformer. Our approach incorporates a multiscale feature extraction embedding layer, significantly reducing the number of input tokens while slightly increasing the feature dimension. The input token sequence length L is shortened to 1/16 to 1/4 of its original size, while the feature dimension d increases by a factor of 2 to 4. The computational complexity of the Transformer is $\\\\mathcal{O}(d L^2 + Ld^2)$. When the sequence length dimension is significantly larger than the feature dimension, the computational complexity effectively decreases. 
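As a rough sanity check of this complexity argument (an illustrative sketch, not the authors' code; L=1024, d=64, a 16x length reduction and a 4x feature-dimension increase are example values drawn from the ranges stated in the responses):

```python
# Illustrative comparison of the dominant Transformer cost term
# O(d*L^2 + L*d^2) before and after patch-based token compression.
# L=1024 tokens and d=64 features are assumed example values;
# compression shortens L by 16x and widens d by 4x.

def transformer_cost(L, d):
    """Dominant operation count: attention (d*L^2) + feed-forward (L*d^2)."""
    return d * L**2 + L * d**2

L, d = 1024, 64
raw = transformer_cost(L, d)                   # uncompressed token sequence
compressed = transformer_cost(L // 16, 4 * d)  # after the embedding layer

print(f"raw:        {raw:,}")                  # 71,303,168
print(f"compressed: {compressed:,}")           # 5,242,880
print(f"reduction:  {raw / compressed:.1f}x")  # 13.6x
```

This matches the qualitative claim: shrinking the token count (which enters quadratically through attention) outweighs the cost of widening the feature dimension, precisely because L starts out much larger than d.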
The specific training and inference time comparative results are presented in the table below:\\n\\n| Dataset | Model | Training time (min) | Sample time (min) |\\n|:-------------:|:-------------:|:-------------------:|:-----------------:|\\n| Sines | Diffusion-TS | 146 | 65 |\\n| | CPDD | **63** | **30** |\\n| Electricity | Diffusion-TS | 109 | 495 |\\n| | CPDD | **92** | **65** |\\n| Etth1 | Diffusion-TS | 181 | 111 |\\n| | CPDD | **74** | **42** |\\n| Energy | Diffusion-TS | 185 | 410 |\\n| | CPDD | **128** | **38** |\\n\\nThe shorter training and sampling times of CPDD compared to Diffusion-TS indicate that CPDD effectively reduces the computational complexity.\\n\\n>**W 3**: In the embedding space shown in Figure 1, what do \\\"distant\\\" and \\\"nearby\\\" mean? This is quite confusing.\\n\\nWe apologize for not making this clear. The terms \\\"nearby\\\" and \\\"distant\\\" refer to the magnitude of the Euclidean distance between two embedding vectors. Specifically, \\\"nearby\\\" indicates a small distance between two embedding vectors, implying that the time series segments corresponding to these vectors exhibit higher similarity in terms of features or patterns learned by the model; whereas \\\"distant\\\" signifies a large distance between two vectors, indicating significant differences in features or patterns for the corresponding time series segments.\"}", "{\"title\": \"Response to Reviewer fwsP\", \"comment\": \">**W 3**:The design of the ablation experiments in this paper is deficient. In addition to DSConv and TPC, CPDD uses other strategies such as Patch Embed and Trend-seasonal Decomposition, yet the ablation experiments presented in Table 2 do not include these structural designs. This raises our concern about the validity of DSConv and TPC.\\n\\nWe apologize for not thoroughly testing the roles of fundamental components. We have added two ablation experiments. 
Firstly, to assess the impact of patch embedding, we introduced tests with patch=1 in the table for comparison with the original patch=16. Secondly, to evaluate the effectiveness of the trend-seasonal decomposition method, we conducted individual tests for the trend and seasonal components.\\n\\n| **Dataset** | **Method** | **Discriminative Score\\u2193** | **Predictive Score\\u2193** |\\n|--------------------------------|------------------------|--------------------------:|----------------------:|\\n| **ETTh1** | Patch size = 1 | 0.499\\u00b1.001 | **0.751\\u00b1.011** |\\n| | Patch size = 16 (CPDD)| **0.352\\u00b1.082** | **0.751\\u00b1.021** |\\n| | Trend-only | 0.499\\u00b1.001 | 0.792\\u00b1.012 |\\n| | Season-only | 0.499\\u00b1.001 | 0.789\\u00b1.014 |\\n\\n| **Dataset** | **Method** | **Discriminative Score\\u2193** | **Predictive Score\\u2193** |\\n|--------------------------------|------------------------|--------------------------:|----------------------:|\\n| **Energy** | Patch size = 1 | 0.499\\u00b1.001 | **0.966\\u00b1.001** |\\n| | Patch size = 16 (CPDD)| **0.488\\u00b1.004** | 0.972\\u00b1.003 |\\n| | Trend-only | 0.499\\u00b1.001 | 0.988\\u00b1.002 |\\n| | Season-only | 0.499\\u00b1.001 | 0.982\\u00b1.004 |\\n\\nThe table shows that the CPDD method (Patch size = 16) achieves the best Discriminative Score across both datasets and a competitive Predictive Score, demonstrating its effectiveness in capturing meaningful patterns.\\n\\n>**Q 1 (Part A)**: The results of the baselines presented in Table 1 are inconsistent with the results presented in the original paper, e.g., the Discriminative Score of the Diffusion-TS model under the Sines dataset in Table 1 is 0.326, whereas it is reported as 0.006 in the original paper. In fact, the results of all baselines in Table 1 differ significantly.\\n\\nWe apologize for the unclear expression that led to the reviewer's misunderstanding. 
In Diffusion-TS, the experimental setting involves comparing the generation of multi-variable time series with a length of 24, while our experiment compares multi-variable time series with a length of 1024. \\n\\nAdditionally, we uniformly applied standard normalization in both the Discriminative Score and Predictive Score evaluations. While our approach primarily focuses on generating multivariate long time series, we believe it is necessary to also test it on shorter time series to evaluate its functionality. We conducted tests on sequences of length 64 in ETTh1, and the results are presented in the table below. (The baseline model results are cited from Diffusion-TS[1].)\\n\\n| Metric\\u2193 | CPDD | Diffusion-TS | Timegan | Timevae | Diffwave | DiffTime | Cot-GAN |\\n|---------------------|-------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Context-FID Score | **0.495\\u00b1.135** | 0.631\\u00b1.058 | 1.130\\u00b1.102 | 0.827\\u00b1.146 | 1.543\\u00b1.153 | 1.279\\u00b1.083 | 3.008\\u00b1.277 |\\n| Correlational Score | 0.183\\u00b1.001 | 0.082\\u00b1.005 | 0.483\\u00b1.019 | **0.067\\u00b1.006** | 0.186\\u00b1.008 | 0.094\\u00b1.010 | 0.271\\u00b1.007 |\\n| Discriminative Score | 0.112\\u00b1.089 | **0.106\\u00b1.048** | 0.227\\u00b1.078 | 0.171\\u00b1.142 | 0.254\\u00b1.074 | 0.150\\u00b1.003 | 0.296\\u00b1.348 |\\n| Predictive Score | **0.102\\u00b1.005** | 0.116\\u00b1.000 | 0.132\\u00b1.008 | 0.118\\u00b1.004 | 0.133\\u00b1.008 | 0.118\\u00b1.004 | 0.135\\u00b1.003 |\\n\\nOverall, our model demonstrates consistently strong performance on both long and short time series, showcasing its robustness across varying sequence lengths.\"}", "{\"summary\": \"This paper presents a diffusion-based method for time series generation that integrates a patch compression module with trend-seasonal decomposition to enhance generation performance.\", \"soundness\": \"2\", 
\"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper effectively leverages a patch compression method to capture complex long-term and short-term dependencies in time series data.\\n\\n2. The authors employ trend-seasonal decomposition to facilitate the diffusion model's ability to learn complex distribution shifts.\", \"weaknesses\": \"1. The evaluation experiments presented in the paper are insufficient to convincingly demonstrate the effectiveness of the proposed method. Specifically, more common used evaluation metrics need to be added (like MSE, MAE, .etc), and the selection of baseline methods (both the diffusion-based methods and the transformer-based methods should be compared) and datasets is not comprehensive enough to provide a robust comparison.\\n\\n2. The formulation of the paper needs significant improvement; the organization and clarity of the text make it difficult to identify the key ideas and contributions. \\n\\n3. The integration of the proposed patch compression method with seasonal-trend decomposition seems to offer limited novelty, as this combination may be viewed as a relatively minor contribution to the existing body of work in this area.\\n\\n4. While the paper employs a transformer as the encoder within the diffusion model, it is essential to consider the associated computational costs when making comparisons with baseline methods.\", \"questions\": \"1. How are the short-term and long-term modes decomposed from the patches?\\n\\n2. What is the rationale for employing the transformer as the encoder of the diffusion model instead of using the transformer directly?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper uses a Time-series Patch Compression (TPC) module to decompose time series patches into mode functions, capturing representations of long-and-short-term features. 
Next, a diffusion model with a CNN backbone models the latent distribution and is used to generate multivariate long-term time series. The model is tested on standard benchmarks in time-series modeling and generation. The primary claim for this work is that CPDD \\\"addresses the challenges of balancing long-term temporal dependencies and short-term feature representations by integrating the TPC module with a diffusion-based generative model\\\". The integration of diffusion based models with existing\\n\\nOverall, this was an empirically driven deep learning paper and at the end of the reviews and discussion period remained a borderline paper that fell on the side of rejection. Despite the commendable changes the authors made to the manuscript (which have improved it), I think it came down to a lack of clarity on precisely how the TPC module helps balance long vs short term features. I think this can potentially be improved by a rewrite of the manuscript to highlight this aspect as well as thorough experimentation along the ablations recommended by reviewer fwsP.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers requested comparisons of computational complexity between CPDD and diffusion models to understand if CPDD reduces training and inference time despite incorporating a Transformer. The authors provided tables comparing training and sampling times on multiple datasets, showing shorter runtimes. The reviewers requested more standard metrics (e.g., MSE, MAE) and further comparisons with additional diffusion- and Transformer-based baselines. The authors agreed that standard metrics alone are insufficient for generative tasks and included Context-FID and Correlational scores. 
The overall CPDD pipeline was unclear, especially regarding the TPC module\\u2019s reconstruction loss, and whether CPDD is trained end-to-end or in two stages. They provided pseudocode for both stages and reaffirmed that the TPC loss function will be explicitly added to the final manuscript.\\n\\nPerhaps the most important point the reviewers raised, and one on which they were not able to come to a satisfactory conclusion, was this idea of \\u201cbalancing\\u201d long-term dependencies with short-term feature learning. Specifically, it was unclear how CPDD distinguished itself from existing multi-scale methods. The response stated that DSConv handled local (intra-patch) patterns, while the Transformer captures global (inter-patch) dependencies. The core claim by the authors is that unlike typical multi-scale methods, CPDD specifically addresses optimization challenges for generative modeling by decomposing the learning objectives. However, I think this last point remained unclear from my reading of the revised manuscript.\"}", "{\"summary\": \"This paper introduces CPDD, a method for time series generation, which addresses challenges in the balance between long-term dependencies and short-term feature learning. It utilizes a patch compression module based on the patch mode decomposition method to obtain the latent encoding of multi-scale features of time series. It utilizes a diffusion-based model to learn the latent distribution and decode the resulting samples, which achieves state-of-the-art performance in the generation task of multivariate long time series with efficiency and robustness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Methodology: CPDD is a patch compression method to capture complex long-term and short-term dependencies more effectively, with the diffusion model for high-quality sample generation.\\n2. 
Empirical Results: The numerical results presented in the paper are compelling, showing significant improvements over competing \\napproaches in terms of generated time series quality. This empirical evidence supports the effectiveness of the proposed method.\\n3. Proofs: The author gives a detailed formal proof of the effectiveness of DSConv blocks' structural regularization.\", \"weaknesses\": \"1. The problem that the article aims to address is confusing; what does \\\"balance between long-term dependencies and short-term feature learning\\\" mean?\\n2. In line 25 of the article, the author mentions \\\"efficiency.\\\" How is this demonstrated in the article? Please compare the model's memory usage and inference time.\\n3. In the embedding space shown in Figure 1, what do \\\"distant\\\" and \\\"nearby\\\" mean? This is quite confusing.\\n4. The entire CPDD process is quite confusing. Please provide a specific implementation process or corresponding pseudocode.\\n5. In lines 340-341 of the article, the author mentions \\\"Z: latent representation obtained from the TPC Encoder during training.\\\" Then why is there no loss term for the TPC Encoder/Decoder in Equation 16? Is CPDD an end-to-end or a two-stage process? Please provide a detailed explanation.\\n6. What is \\u201cL_{AFC}\\\" in Equation 16? Is there any difference between \\\"L_{AFC}\\\" and \\\"L_{ACF}\\\"?\\n7. The writing issues in the article are evident, with many sentences being difficult to understand, and the challenge that the article aims to address is not clearly defined.\", \"questions\": \"Please refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate your updates. My confusion has been clarified. Generally, this paper provides some new insights into multi-scale time series generation. 
I would like to rate it as \\\"weakly accept\\\", but no such option is provided. So, I will maintain my initial score.\"}", "{\"title\": \"Response to Reviewer tQBV\", \"comment\": \"I'm pleased to address the concerns you have raised.\\n>What are the lengths of time series used in the updated comparisons of training time and inference time? Please specify the data settings for the empirical computational costs.\\n\\nThe length of the time series used in the updated training and inference time comparisons is 1024 time steps with 7 channels. We have now updated the detailed hyperparameters in Table 5 in Section A.1 of the appendix. \\n\\n> My concerns regarding the effect of patch length N on model performance have not been fully addressed, particularly the first part of my question 2. Specifically, is the model's performance consistent across various patch lengths within the possible range? Would it be possible that only certain finely tuned patch lengths yield the superior performance demonstrated in the experiments, while other patch lengths do not maintain this level of effectiveness? \\n\\nDifferent patch size settings can yield good performance, but the range of patch sizes that exhibit the most balanced performance across various evaluation tests is relatively narrow. Given that the time series generation task necessitates evaluating the quality of generated data from multiple viewpoints, opting for a patch size that balances performance is justifiable. The comparison experiments of different patch sizes and their detailed hyperparameter settings have been revised and are now presented in Tables 6 and 7 in Part A.2 of the Appendix.\\n\\n>Diving into the settings in your response, does the model perform stably across the patch lengths ranging from 4 to 16?\\n\\nBased on the experimental results, the performance remains relatively stable. 
We can roughly estimate the optimal patch size setting and the corresponding feature dimension setting by considering the length and the number of channels in the input time series. \\n\\nWe adhere to a specific hyperparameter setting rule to ensure that the compression ratio of TPC ((length of input time series \\u00d7 number of channels) / (length of compressed time series \\u00d7 feature dimension)) falls within the range of 1 to 4. Some exceptions may arise, particularly when dealing with a high number of channels in the input time series, as demonstrated in Experiment 1 with 28 energy channels and a compression ratio of 14, resulting in inadequate memory allocation.\"}
This study aims to address the key challenges in generating multi-dimensional long-time series, focusing on the problem of generating multi-dimensional time series with lengths exceeding 256 time steps. This issue remains unexplored in existing research, especially in complex domains such as electricity and transportation, where there is a high demand for long-time series generation, yet the performance of existing methods is still limited. \\n\\nMulti-dimensional long-time series generation not only requires generating long-time series with high fidelity but also maintaining complex dynamic patterns in the generated data and dependencies between variables. Taking the electricity and transportation sectors as examples, analytical tasks often rely on long-span time series data to capture seasonal trends, periodic fluctuations, and interactive patterns among multiple variables. However, existing generation models (such as frameworks based on autoregressive models, generative adversarial networks, or diffusion models) mostly focus on data sequences of lengths not exceeding 256 time steps, making it challenging to effectively extend to longer sequences, especially in multi-variable settings. \\n\\nWe believe that the challenge in generating multi-dimensional long-time series lies in existing methods' inability to simultaneously consider long-term dependencies and short-term feature learning. Therefore, we explored a method to effectively compress and encode multi-dimensional long-time series into shorter time series. This method utilizes a time series patch mode decomposition technique to break down patches into long-term, short-term, and residual mode functions, effectively preserving long-term dependencies and short-term feature information in the compression encoding, thereby laying the foundation for the latent diffusion generation in the second stage. 
We will promptly make the revision in the manuscript.\"}", "{\"title\": \"Reference\", \"comment\": \">Reference:\\n\\n[1] Nie, Yuqi et al. \\u201cA Time Series is Worth 64 Words: Long-term Forecasting with Transformers.\\u201d ICLR 2023.\"}", "{\"title\": \"Response to Reviewer F7YS\", \"comment\": \">**W 2**: The formulation of the paper needs significant improvement; the organization and clarity of the text make it difficult to identify the key ideas and contributions.\\n\\nWe sincerely apologize for our poor expression. We will promptly revise the relevant statements in the paper. Allow us to attempt to introduce the design context of CPDD in the following. In order to address the challenge of generating multivariate long-time series, we seek to extend the method of image latent diffusion generation to the temporal domain. However, despite the excellent performance of the VAE latent encoder in image latent generation, it faces the following issues in temporal latent diffusion generation: \\n1. Insufficient temporal dependency: VAE and other temporal latent encoders cannot effectively capture both long-term dependencies and short-term features. \\n2. Limited latent space distribution: In temporal diffusion, the uneven distribution of data (such as low sample density in certain time periods) may lead to uneven representations in the latent space initialized by VAE and other latent encoders, potentially causing the diffusion process to favor generating samples in high-density areas. \\n3. High latent encoding dimension: Unlike the high redundancy of image information, temporal information has lower redundancy. To achieve high-quality generation, VAE and other latent encoders may require a very high latent dimension in the initialized latent space. 
\\n\\nTherefore, our core idea is to seek a novel latent encoder that can generalize the representation of multivariate long-time series with a lower latent dimension while effectively preserving their temporal dependency. This encoder should possess the ability to efficiently compress a wide range of temporal data and maintain dynamic features in the encoding rather than solely static representations. Combining the above analysis, our contribution lies in proposing the temporal Patch modal decomposition technique to meet the aforementioned requirements and introducing the framework of CPDD, achieving high-quality generation of multivariate long-time series.\\n\\n>**W 3**: The integration of the proposed patch compression method with seasonal-trend decomposition seems to offer limited novelty, as this combination may be viewed as a relatively minor contribution to the existing body of work in this area.\\n\\nWe thank the reviewer for the constructive feedback regarding the integration of the patch compression method with seasonal-trend decomposition. We appreciate the perspective and will further emphasize how this combination contributes to addressing specific challenges in long-term time series generation, as well as clarify its novelty compared to existing methods.\\nWe acknowledge that the approach of time-series mode function decomposition is not entirely novel and that traditional time series analysis methods such as VMD[3] and EMD[4] effectively utilize this concept. \\n\\nHowever, the integration of mode function decomposition with deep learning for enhanced time series analysis remains a relatively unexplored direction. Specifically, EMD and VMD methods lack the capability to decompose time series in parallel on GPUs, hindering direct integration with deep learning techniques. In contrast, the DSConv module introduced in this study facilitates the model in learning a series of Patch mode functions to achieve a more comprehensive representation. 
\\n\\nCPDD utilizes time series patches as the fundamental unit for decomposition, aiming to simplify the process of VMD or EMD. Each time series patch is a composition of long-term, short-term, and residual patterns. Limiting the patch length to a range of 4 to 16 enables us to efficiently employ processing components for long-term, short-term, and residual modes, leading to successful decomposition outcomes. Employing Time-series Patch mode function decomposition technology results in a compression ratio ranging from 1 to 14 times (the original data volume/compressed data volume). This approach achieves superior compression outcomes compared to conventional deep learning-based time series compression encoders [5-6].\"}
(ICML2024)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer fwsP\", \"comment\": \">**W 1**: As a model \\u2018designed for multivariate long-term time series\\u2019, the main innovative structures proposed by CPDD, DSConv and TPC, do not have a structure or design aimed at establishing cross-channel connectivity.We believe that a key question is whether the proposed single channel Convolution can establish connectivity across a large number of channels, e.g., the ECL dataset of the electricity Scenarios in the time series prediction contains 321 channels and the Traffic dataset contains 862 channels. Advances in multivariate prediction methods (iTransformer[1], SAMformer[2]) have shown that proper integration of channel management strategies in time series backbones is crucial for discovering univariate dynamics and cross-variate correlations.\\n\\nWe apologize for the confusion caused by the unclear expression. CPDD has structures designed for managing cross-channels, specifically, the proposed DSConv in the paper inherits the point convolution structure of depthwise separable convolution, learning the relationships between different variables in patches through point convolution operations. Single-channel convolutions have convolution kernels with a large size (typically 4 times the depthwise convolution kernel), aiming to learn the relative scale relationships between different patch mode functions. \\n\\nAs the reviewer mentioned, cross-channel correlations have an impact on time series tasks. However, the correlations between variables in time series datasets from different domains vary. For instance, in power data, the relationships between variables represent different load nodes, including spatial topology and the balance between grid load and generation. 
To systematically address the issue of generating cross-channel correlations in time series, it is crucial to consider the differences in time series from various domains. \\n\\nThis challenge may require an additional paper to address. Generating long-time series (length 1024 and above) for power data and traffic data with hundreds of nodes is quite challenging. \\n\\n>**W 2 (Part A)**: The lack of advanced baselines leads to the inability to validate the competitiveness of the proposed CPDD. Specifically, only three baselines based on Diffusion are shown in Table 1, and among them, TimeGAN and TimeVAE are published in 2021 and 2019, respectively. The introduction of a wider range of Baselines to compare the performance of the proposed models is expected to be complemented to fully validate the effectiveness of the proposed methods. The referenced baselines can be divided into 4 parts: 1) Models based on pre-trained LLM alignment to TS, e.g. TimeLLM[3]; 2) Pre-trained foundation models on unified time series datasets from multiple domains, e.g. Timer[4], UniTime[5]; 3) Proprietary models trained and tested on specific datasets, e.g. PatchTST[6]; 4) Recent Diffusion-based temporal probabilistic prediction models, e.g. Diffusion-TS[7], mr-Diff[8].\\n\\nThanks to the reviewer for this valuable suggestion and the detailed categorization of baselines. One misunderstanding to clarify: TimeGAN and TimeVAE are not diffusion models; only Diffusion-TS is a diffusion model. \\n\\nWe acknowledge the current lack of sufficient time series generation baselines for comparison. We will promptly test more time series generation baseline models and update them in the comments section or paper appendix. \\n\\nTimeLLM, Timer, UniTime, PatchTST, and mr-Diff are outstanding contributions in the field of time series forecasting. 
However, they are not typically employed for direct time series generation.\\n\\n>**W 2 (Part B)**: In addition, we would like to introduce more benchmarks, such as ECL and Traffic datasets with a large number of channel counts, which we believe will help to validate the promising real-world applications of the proposed models.\\n\\nThanks for the reviewer's comment. We have measured the Electricity dataset and the Energy dataset in Table 1 of the paper. As mentioned in weakness 1, generating ECL and Traffic datasets with a large number of channel counts poses significant challenges for both the CPDD model and current mainstream generation methods. This challenge is not only reflected in the model's generation capabilities but also in designing appropriate metric methods for these complex datasets. \\n\\nWe have also dabbled in spatio-temporal predictive analytics in the electricity and transportation domains. We believe that the synthesized ECL and Traffic data should exhibit characteristics that adhere to real-world physical constraints and can be directly used for power and traffic analysis tasks. Addressing these challenges requires exploring the design of more specialized synthesis frameworks and evaluation methods, which would be better suited for publication in journals or conferences within the fields of power or traffic analysis.\"}
I will maintain my scores.\"}", "{\"summary\": \"This paper aims to address the challenge of balancing the long-term dependencies and short-term features in time series generation and proposes a novel model named Compressed Patch Denoising Diffusion-model (CPDD). The proposed approach first employs a Time-series Patch Compression (TPC) module to decompose time series patches into mode functions, effectively capturing latent representations of both long-term and short-term features. Afterward, a diffusion-based model with a CNN backbone is designed to learn the latent distributions and generate multivariate long-term time series. Experimental results demonstrate that CPDD achieves SOTA performance in time series generation. Furthermore, the robustness and generalization capabilities of the TPC module are rigorously verified.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The generative modeling approach of decomposing time series patches into mode functions presented in this paper is novel and well-positioned in the literature on time series generation, to the best of my knowledge.\", \"The introduction of the Time-series Patch Compression (TPC) module marks a notable innovation in time series modeling. This module provides a robust alternative to the commonly used autoencoder-based compression methods and trend-seasonality decomposition techniques. The exploration of its robustness and generalization is particularly noteworthy.\", \"Figures 1 and 2 provide a clear illustration of the proposed approach, making the core designs easier to understand.\"], \"weaknesses\": [\"This paper identifies high computational demands as a limitation of existing methods, but the proposed approach also employs a computationally intensive Transformer-based architecture. 
Therefore, a detailed analysis of the computational complexity of the proposed CPDD is essential.\", \"The visualizations in this paper are generally of high quality. Unifying the font size in Figure 4, especially on the left-side module, to match that of other figures would improve visual consistency and readability.\"], \"questions\": \"1. CPDD divides the entire time series into N patches, which might pose a risk of disrupting critical temporal patterns at the boundaries of these patches. Could this potentially affect the model's ability to accurately capture and reproduce these dynamics?\\n2. How does the patch length N impact the model performance? Is there an optimal range of N?\\n3. Can you provide a detailed comparison of the computational complexity between CPDD and baselines? For example, a table comparing their training and inference time.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response\", \"comment\": \">However, I still cannot agree with the challenge that CPDD aims to address. In both the submitted paper and the responses provided by the authors, the meaning of \\u201cbalance between long-term dependencies and short-term feature learning\\u201d has not been adequately addressed.\\n\\nThank you for raising this important question about our method's advantages. \\nCPDD achieves an effective balance between long-term dependencies and short-term features through a **divide-and-conquer approach**. By dividing the time series into patches, we decompose the complex pattern learning problem into two distinct sub-tasks: local feature (short-term feature) learning within patches and global dependency (long-term dependency) learning between patches. 
These two levels of features are processed independently using specialized components (Transformer for inter-patch relationships and DSConv for intra-patch patterns) and then merged into a unified latent space representation. This hierarchical processing strategy enables more efficient and focused learning at different temporal scales, representing an innovative approach not found in existing temporal generation methods. \\n\\n>The authors continue to mention that CPDD employs a decoupled approach to learning long- and short-term multi-scale features in time series, but this idea is quite common in time series domain, as demonstrated by works such as Scaleformer [1], Moirai [2], etc. Therefore, I keep my score.\\n\\nWe appreciate your careful examination of our method's novelty. We'd like to clarify a fundamental difference between CPDD and existing multi-scale approaches in time series analysis:\\nThe key distinction lies not in whether we use multi-scale processing, but in why and how we apply it. While Scaleformer and Moirai utilize multi-scale architectures primarily for feature extraction and representation, CPDD approaches the problem from an **optimization perspective**. Our method specifically addresses the computational and optimization challenges in temporal pattern generation through **structured decomposition**:\", \"problem_formulation\": \"Traditional multi-scale methods focus on learning better representations at different time scales\\nCPDD, however, uses decomposition to solve the inherent optimization conflict between capturing local patterns and modeling long-range dependencies.\", \"solution_strategy\": \"Instead of parallel multi-scale feature extraction, CPDD employs a structured divide-and-conquer approach to break down the learning objective. 
\\nThis decomposition significantly reduces the optimization difficulty by allowing specialized components to focus on distinct aspects of the temporal dynamics.\", \"generation_focus\": \"While existing methods excel at feature extraction for tasks like classification and prediction, CPDD is specifically designed for the unique challenges of temporal pattern generation. \\nOur approach enables more controlled and efficient pattern synthesis by separately handling local details and global structure.\\n\\nThis fundamental difference in problem formulation and solution strategy distinguishes CPDD from existing multi-scale approaches. We apologize for not making this distinction clearer in the paper and would appreciate the opportunity to better articulate these differences.\"}" ] }
4es2oO9tw1
Compute-Constrained Data Selection
[ "Junjie Yin", "Alexander M Rush" ]
Data selection can reduce the amount of training data needed to finetune LLMs; however, the efficacy of data selection scales directly with its compute. Motivated by the practical challenge of compute-constrained finetuning, we consider the setting in which both the cost of selecting data and training are budgeted for. We first formalize the problem of data selection with a cost-aware utility function, and model the data selection problem as trading off initial-selection cost for training gain. We run a comprehensive sweep of experiments across multiple tasks, varying compute budget by scaling finetuning tokens, model sizes, and data selection compute. Interestingly we find that many powerful data selection methods are almost never compute-optimal, and that cheaper data selection alternatives dominate both from a theoretical and empirical perspective. For compute-optimal training, we find that perplexity and gradient data selection require training-to-selection model size ratios of 5x and 10x, respectively.
[ "Data Selection", "Scaling Laws", "Compute-constrained", "Compute-optimal Training." ]
Accept (Poster)
https://openreview.net/pdf?id=4es2oO9tw1
https://openreview.net/forum?id=4es2oO9tw1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zhIkBO50gX", "uawsdKHR8t", "u2pffVgcFw", "tgu87jNLp3", "kuH5o1yLTK", "gXJanwChJu", "fO1helfoEh", "Zod8SvdZMM", "Yq7mLK2s0s", "YLcKyxM7vU", "VfBCFIBP88", "Vb7bAZiPTH", "Szgk5l5FE9", "ROAEitJRHT", "R2q7DkpEF2", "LIw9W1uLtt", "KbSXEZcC7a", "JvdIxtcbUK", "Dm65QZWfdq", "DZwu6dFxb8", "4qoDEv6WYv", "1SVcQrnofR", "0sDoqlvV3c" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734703813171, 1732622626664, 1732483733689, 1732232657868, 1732232530178, 1730455744601, 1732229249863, 1732574958112, 1732421496422, 1732229449020, 1730699189784, 1732229821747, 1732233158210, 1732231977567, 1737524180442, 1732229111150, 1730598104094, 1732231399621, 1730745228937, 1732228505416, 1732228834921, 1732230592383, 1733104478034 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12314/Area_Chair_VHa3" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_B9fL" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_mMXd" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_B9fL" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_Z24H" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_9BiK" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_Z24H" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], 
[ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_mMXd" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Reviewer_9BiK" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ], [ "ICLR.cc/2025/Conference/Submission12314/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies the cost of data selection by formalising the trade-off between data selection cost and training gain. The authors validate their insights experimentally. They show for instance that one has to pay attention to the computational cost of sophisticated model selection techniques and that simpler (and cheaper) methods might be preferable in practice. Authors revised the paper and appendices based on the feedback of the reviewers, addressing their comments and providing additional experimental evidence.\", \"additional_comments_on_reviewer_discussion\": \"The authors adequately clarified the concerns raised by the reviewers. After rebuttal, all reviewers trended towards acceptance. As noted by one of the reviewers during the discussion, it is not surprising to see the tradeoffs discussed in this work, but as another reviewer indicated, few works in data selection show that powerful data selection techniques are costly from a computational point of view and thus not always useful in practice. Overall, the reported results make this paper interesting to the community and worth sharing.\"}", "{\"comment\": \"I would like to thank the authors for the revised manuscript and the response to my questions.\\n\\nI have to say that I am still doubtful regarding the greedy approach that scores the datapoints individually. 
The approach in (Kirchhoff & Bilmes, 2014) does not score the points individually, as outlined in Algorithm 1 of their paper. The function $f$ scores the candidate points $v$ dependent on the set of already selected points $X_i$. Thus, the algorithm also exhibits quadratic runtime complexity.\\n\\nIf the points are first scored individually and afterwards simply ranked and selected until the budget is exhausted, interactions like redundancy cannot be captured. Needless to say, my example of copying the most informative point $K$ times was a constructed and unrealistic example, but the problem remains: similar/redundant datapoints would be selected. I think this is a fundamental problem. As none of the other reviewers commented on this, I would like the AC to have a look at it and will change my rating from 6 to 5.\\n\\nApart from the aforementioned point, the authors' response clarified my questions, and I also like the premise of the paper: not introducing a novel method but comparing the currently available methods with a holistic focus on compute for the overall process of selection and fine-tuning.\\n\\n_References_: \\n\\n(Kirchhoff & Bilmes, 2014) Kirchhoff, Katrin, and Jeff Bilmes. \\\"Submodularity for data selection in machine translation.\\\" Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014.\"}
We have since updated the manuscript to adhere strictly to the ICLR 2025 template.\\n\\n> (Minor) Typo(s): Figure 1 \\\"using a much larger base model then data selection model\\\": \\\"then\\\" should be \\\"than\\\".\\n\\nThanks again for the careful read. We have fixed this in our revised version (see line 235).\"}", "{\"title\": \"Response to Reviewer mMXd (3/4): Insights\", \"comment\": \"> The authors ... collect all the results without providing sufficient analyses.\\n\\nWe believe our paper goes beyond scaling experiments to provide substantial analyses, including:\\n\\n- **Parametric Modelling of Performance:** In Section 5, we introduce a parametric model (Equation 3) to quantify the relationship between computational investment and performance gains. We analyze the behavior of different data selection methods theoretically and fit this model to our empirical data.\\n- **Compute-Optimality Analysis:** We empirically evaluate the compute-optimality of various data selection methods across model sizes and tasks, finding that more sophisticated methods are often not compute-optimal under practical constraints.\\n- **Scaling Extrapolation Analysis:** In Section 7, we analyze how compute-optimal data selection methods vary with model size and compute budget. 
Our analysis shows that for the complex method to be compute-optimal, the training model size needs to be about 10x the data selection model size.\\n- **Break-Even Analysis for Multiple Tasks:** We perform a break-even analysis to identify when the upfront cost of expensive data selection methods is justified over multiple tasks (Section 8 and Appendix H).\\n- **Data Similarity Analysis:** We examine the overlap between datasets selected by different methods (Section 8 and Appendix I) to compare their selections.\\n\\nIf you have specific suggestions for additional analyses, we are open to incorporating them.\\n\\n> The motivation for studying such a computation-constrained data selection problem is not fully supported. \\n\\n**The motivation for our study stems from a critical practical challenge in the deployment of LLMs: the computational cost of fine-tuning [Hu]**. In most practical settings, compute resources are the core constraint, and practitioners must make strategic decisions about how to allocate these resources effectively to optimize for performance.\\n\\nMotivated by this practical challenge, we study an important allocation problem: during fine-tuning, whether to allocate more compute to data selection or simply train on more raw data. We offer practical insights and advice based on our theoretical analysis and empirical findings.\\n\\nThis paper encourages the community to re-evaluate the compute efficiency of existing strategies and to develop more efficient methods.\\n\\n*References:*\\n\\n[Hu] Hu, Edward J., et al. \\\"Lora: Low-rank adaptation of large language models.\\\" *arXiv preprint arXiv:2106.09685* (2021).\"}", "{\"summary\": \"The authors present a study on compute-constrained data selection for training large language models (LLMs).\\nUnlike preceding works, they do not constrain the size of the training set, but the compute, which is the sum of computational expenditure for data selection as well as LLM training. 
\\nThey compare various existing data selection methods under this setting and come to the conclusion that many powerful data selection methods are almost never compute-optimal due to their computational cost, making cheaper data selection the favorable choice.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The study is well motivated and its results are of practical importance for finetuning large language models.\", \"Empirical findings correspond well with the theoretical framework\"], \"weaknesses\": [\"The title of the paper is rather broad, while the study is rather specific. \\\"Compute-Constrained Data Selection\\\" does not indicate which type of data is selected for which type of task.\"], \"minor_remarks\": [\"p.3 line 3: \\\\mathcal{S} \\\\subseteq \\\\mathcal{D}, as \\\\mathcal{X} is not introduced\", \"p. 6 bottom: methods -> method\"], \"questions\": [\"Regarding the notion of utility in Section 3: Is utility here something that is to be minimized, i.e. alternatives with lower utility are preferred over alternatives with higher utility? In the remainder of the paper (expected) model performance is considered, for which clearly higher values are preferred.\", \"I am not sure whether I understood the greedy data selection introduced in Sections 3 and 4. I am under the impression that all data points are scored individually and afterwards they are ranked according to their scores and selected until budget K is exhausted. Isn't it necessary to do this in an interleaved manner, in order to capture effects like redundancy and the submodularity of utility? Consider the extreme case in which the most informative data point x is repeated K times in the dataset, then we would end up with a selection that contains the same datapoint K times.\", \"In Figure 2, the plot represents performance of Mid-PPL, while the plot in Figure 3 represents performance of Top-PPL. 
What is the reason for this discrepancy?\", \"In Figure 2, what exactly is the dashed line? Shouldn't the Pareto front contain all solutions that are dominating on the dimensions of compute (low) and accuracy (high)? The line is straight, is it something like a linear regression applied to the solutions on the Pareto front?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9BiK (4/4): Questions\", \"comment\": \"> The value of the strategies that depend on similarity of samples to validation samples worry me in that they seem very dependent on the size of the validation set, and that if the validation set is too small one might overfit. But perhaps it doesn't matter too much since you are always selecting some large number of training samples anyways, and so even if the validation set is as small as Paris (to use a 2D example), you still correctly pick the subset of training samples in Europe and dump the ones in the Americas, and that wouldn't have changed much if the validation set was all of France instead of just Paris. Be great to see some discussion & experiments about this, even if they are tiny, in this paper.\\n\\nThis is indeed a valid concern. As the reviewer has mentioned, since we are selecting a large number of samples from the training set, the precision required is not stringent. We provide the statistics of the evaluation datasets in our experiments below. This follows the hypothesis; it appears that as long as the validation set is more than a few dozen (>50), BM25 and Embed should work fine. 
\\n\\n| Dataset | # Shot | # Tasks | D_val | D_test | Answer Type |\\n| ------- | ------ | ------- | ----- | ------ | --------------- |\\n| MMLU | 5 | 57 | 285 | 18,721 | Letter options |\\n| BBH | 3 | 23 | 69 | 920 | COT and answer |\\n| IFEval | 1 | - | 50 | 541 | Open Generation |\\n\\nWe've since included the evaluation dataset statistics and a discussion about the size of the validation set in the revised paper (see Appendix D.4).\"}", "{\"comment\": \"Thanks for the efforts in the rebuttal. My score remains the same!\"}", "{\"title\": \"Reviewer acknowledgement\", \"comment\": \"I have read the other reviews and the author response. I stand by the opinions expressed in my original review, and look forward to a spirited discussion with my colleagues about whether these studies are well-done and significant enough.\"}", "{\"title\": \"Response to Reviewer Z24H (1/2): The Tipping Point\", \"comment\": \"Thank you for the insightful comments and careful review. Responses are divided into sections.\\n\\n> Although the author claims some simple methods such as Lexicon outperform the complex ones such as Perplexity and Gradient, as shown in Figure 1, the complex ones perform quite well especially under medium and large budget situations. It would be more important to study the tipping point, where the performance gains plateau became flat. This is the place where further increases in computing resources yield diminishing returns.\\n\\nIf we understand the **tipping point** correctly\\u2014as the ratio between the training model size and the selection model size when complex methods like Perplexity and Gradient outperform cheaper methods like Lexicon\\u2014our empirical findings indicate that this occurs when training a 70B model using a 7B selection model. At this point, PPL and LESS outperform both BM25 and Embed across benchmarks, becoming compute-optimal for the first time. 
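A back-of-the-envelope sketch of why this size ratio matters (the 2*P FLOPs-per-scored-token and 6*P FLOPs-per-trained-token constants are the usual rough estimates, and the pool and token counts are invented for illustration, not taken from the paper):

```python
def total_flops(pool_tokens, train_tokens, selector_params, train_params,
                selector_passes=1):
    """Rough FLOPs accounting (assumed constants, not the paper's exact
    figures): ~2*P FLOPs per token for a forward scoring pass over the
    candidate pool, ~6*P FLOPs per token for training."""
    selection = selector_passes * 2 * selector_params * pool_tokens
    training = 6 * train_params * train_tokens
    return selection, training

pool, budget_tokens = 1e9, 1e7  # hypothetical pool and training-token counts
for train_params in (7e9, 70e9):
    sel, tr = total_flops(pool, budget_tokens, 7e9, train_params)
    print(f"train={train_params:.0e}: selection is {sel / (sel + tr):.0%} of total")
```

With these made-up numbers, scoring a 1B-token pool with a 7B selector dominates the budget when training a 7B model, but its share shrinks as the training model grows, which is why expensive scoring only pays off once the training model is several times larger than the selector.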
Generally, for PPL and Gradient to become compute-optimal, our extrapolation finds that the ratio between the train model size and selector model size should be greater than 5x and 10x, respectively.\"}", "{\"summary\": \"This paper studies a framework considering the practical challenges of training and fine-tuning large language models (LLMs) under computational constraints. It has established a trade-off between achieving better performance using larger data and lowering computational costs by selecting smaller subsets of data. A key takeaway is that simpler data selection methods, such as lexicon-based and embedding-based approaches, often provide a more efficient solution compared with more complex, compute-intensive methods like perplexity-based and gradient-based strategies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper addresses compute-efficient fine-tuning, which is an important task in training LLM. Extensive simulations are conducted to provide empirical evidence and support the framework.\", \"weaknesses\": \"1. Although the author claims some simple methods such as Lexicon outperform the complex ones such as Perplexity and Gradient, as shown in Figure 1, the complex ones perform quite well especially under medium and large budget situations. It would be more important to study the tipping point, where the performance gains plateau became flat. This is the place where further increases in computing resources yield diminishing returns.\\n2. It is not surprising to see the tradeoff between performance and data size. The conclusions in this paper are largely empirical and may not generalize well to other situations. The practical limit of parametric fit is limited, as it mainly fits observed data points without clear guidance on how to *extrapolate* results to new scenarios. For example, can the results from smaller models (e.g., 13B) be used to predict outcomes for larger models (e.g., 70B)? 
Can the parameters estimated from smaller models be reliably transferred to larger models? If practitioners need to run experiments on 70B models to obtain these insights and fit the parametric model, the results may not be useful.\", \"questions\": \"1. Could the author discuss a real-world scenario to demonstrate how the proposed methods could be applied to guide practitioners?\\n2. Are the studied methods sensitive to the choice of model architecture?\\n3. How do these methods scale with hardware improvements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Z24H (2/2): Generalization & Extrapolation\", \"comment\": \"> The conclusions in this paper are largely empirical and may not generalize well to other situations.\\n\\nWe would like to highlight the steps we've taken to ensure generalizability of our findings:\\n- We conducted extensive evaluations, over 600 training runs, across multiple model sizes\\u2014specifically 7B, 13B, and 70B parameters\\u2014and tasks, including MMLU, BBH, and IFEval, which assess various abilities of LLMs. \\n- We tested our approach on different model families, such as Llama3, and found similar trends as with the Llama2 models. \\n\\nNotably, many canonical works on scaling laws and compute-optimal training have also based their conclusions on empirical findings [Kaplan, Hoffmann, Muennighoff]. We provide new insights into the compute-optimality of data selection methods for LLM finetuning. Our work encourages the community to reconsider the compute efficiency of existing data selection strategies and motivates the development of new methods that are both effective and computationally efficient.\\n\\n> The practical limit of parametric fit. Extrapolation from smaller model results.\\n\\nThanks for pointing out that the specific uses of the parametric fit were not made clear. 
To clarify, we include new discussion, analysis, and figures for extrapolation (see Section 7 and Appendix G). \\n\\nWhile we cannot accurately predict the exact downstream performance, **the parametric fit obtained from smaller models allows us to predict the ratio of training model size to selector model size needed for compute-optimality.**\\n\\nAs an example, we use the parametric fit we obtained from the 7B-7B and 13B-7B (Train Model Size/Selector Model Size) MMLU run and analyze how much bigger the train model needs to be, under gradient data selection, to surpass the current finetune Pareto frontier. As shown in (https://ibb.co/jRXx2KD), the compute-optimal ratios we derive from both 7B and 13B are 10x and 5x, respectively, suggesting that gradient method becomes compute-optimal at about 70B parameter size when using a 7B model for data selection. This aligns with our empirical observations from the 70B experiments, showing that practitioners can predict tipping points using parametric fits obtained from 7B and 13B models.\\n\\n\\n\\n*References:*\\n\\n[Kaplan] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" *arXiv preprint arXiv:2001.08361* (2020).\\n\\n[Hoffmann] Hoffmann, Jordan, et al. \\\"An empirical analysis of compute-optimal large language model training.\\\" *Advances in Neural Information Processing Systems* 35 (2022): 30016-30030.\\n\\n[Muennighoff] Muennighoff, Niklas, et al. \\\"Scaling data-constrained language models.\\\" *Advances in Neural Information Processing Systems* 36 (2023): 50358-50376.\"}", "{\"title\": \"Response to All Reviewers\", \"comment\": [\"We thank each reviewer for their thorough reading of our work and their thoughtful feedback. 
We have made the following updates to the paper:\", \"**Revised main paper:**\", \"**(Section 7)**: Added details on the fine-tuned Pareto efficient frontier and power-law fitting.\", \"**(Section 7)**: Added extrapolation from our fitted parametric functions to predict the compute-optimal ratio between training and selection model sizes.\", \"**Findings**: For perplexity data selection, compute-optimality occurs when the training model is about 5 times larger than the selection model (35B parameters). For gradient data selection, it occurs when the training model is about 10 times larger than the selection model (70B parameters).\", \"Rephrased sentences throughout and fixed typos.\", \"**Added additional empirical results:**\", \"**(Appendix F)**: Conducted 13B model experiments on target task IFEval, which evaluates the model's instruction following ability.\", \"**Findings:** We find that the results are consistent with our previous empirical findings: at medium budget (13B model scale), cheap lexicon-based methods (BM25) and embedding-based methods (Embed) outperform perplexity-based (PPL) and gradient-based methods (LESS).\", \"**Added appendix sections:**\", \"**(Appendix D.4)**: Provided details on evaluation datasets and a discussion about the size of the validation dataset.\", \"**(Appendix G)**: Included details, figures, and results on extrapolating from the parametric functions\", \"**Improved appendix sections:**\", \"**(Appendix H)**: Added additional break-even analysis for 7B and 13B model sizes on the target task IFEval.\", \"**(Appendix I)**: Included additional data similarity heatmaps for the target task IFEval.\", \"**Reviewers have asked about the practical motivations for the parametric fit**:\", \"**Choosing Data Selector Size.** Fits from smaller model experiments enable us to predict the train/selector model size ratio. 
For example, a 10x ratio derived from our 7B and 13B experiments suggests that gradient-based methods become compute-optimal at ~70B parameters when using a 7B selector model.\", \"**Choosing Number of Training Points.** Fits also indicate how many data points are needed for a data selection method to achieve near-optimal performance.\"]}", "{\"title\": \"Response to Reviewer mMXd (2/4): Soundness\", \"comment\": \"> The parametric model is selected to be an exponential distribution and the model is fitted to minimize the squared error, but the choice is never justified by any theoretical analysis or numerical results. The fitted curve is also not very convincing (e.g. Figure 3 and Figure 7)\\n\\nThanks for the comments; we cover our motivations in Section 5. Below, we outline the reasoning behind selecting an exponential function for our parametric model:\\n\\n- **Neural Scaling Law:** Scaling laws in LLMs suggest that an exponential increase in FLOPs leads to a linear decrease in loss [Kaplan]. This motivates the use of an exponential function. \\n- **Diminishing Returns in Data Utility:** We hypothesize that only a small subset of the data provides the most value for any given task, and that new data points will provide increasingly lower value. The diminishing returns are generally modeled using exponential decay or saturation functions [Muennighoff].\\n- **Convergence to an Upper Bound:** We assume that all data selection methods eventually reach the same upper-bound performance after sufficient compute.\\n\\nUnder these considerations, the exponential saturation formulation we proposed effectively captures the characteristics we want to model: diminishing returns with increasing compute and asymptotic convergence to an upper performance limit. \\n\\nImportantly, we model performance using downstream task metrics (e.g., MMLU, BBH scores), which are non-linear and not directly proportional to perplexity. 
Due to this non-linearity, performance can exhibit sharper and less predictable changes as we scale compute [Schaeffer]. We report the average RMSE of the fit in the table below; given that the standard deviation of our results is within 0.5, we believe the error is minimal and the fit is reasonable.\\n\\n**MMLU Fit Losses (RMSE)**\\n\\n| Model Size | Random | BM25 | Embed | PPL | LESS | Average RMSE |\\n| ---------- | ------ | ------ | ------ | ------ | ------ | ------------ |\\n| 7B | 0.1208 | 0.1689 | 0.0991 | 0.3440 | 0.1245 | 0.1715 |\\n| 13B | 0.1534 | 0.2787 | 0.6562 | 0.2923 | 0.0428 | 0.2847 |\\n| 70B | 0.1696 | 0.2191 | 0.1933 | 0.1077 | 0.0734 | 0.1526 |\\n\\n**BBH Fit Losses (RMSE)**\\n\\n| Model Size | Random | BM25 | Embed | PPL | LESS | Average RMSE |\\n| ---------- | ------ | ------ | ------ | ------ | ------ | ------------ |\\n| 7B | 0.1870 | 0.2313 | 0.1214 | 0.2846 | 0.1243 | 0.1897 |\\n| 13B | 0.8600 | 0.1655 | 0.6348 | 0.3995 | 0.1441 | 0.4408 |\\n| 70B | 0.2511 | 0.1075 | 0.1273 | 0.2353 | 0.1956 | 0.1833 |\\n\\n> The Pareto frontier is never formally defined in this paper nor sufficiently discussed. \\n\\nIn our context, the Pareto front contains all solutions that dominate along the dimensions of compute (x-axis) and performance (y-axis). These solutions represent the most efficient choices, providing the best possible performance for a given compute budget under specific data selection methods and training token lengths. Furthermore, we assume that the efficient computational frontier can be described by a power-law relationship between the compute budget and the number of training tokens. A power law refers to a term of the form $a*log(x)+b$, where the terms $a$ and $b$ are fitted. This form has been extensively referenced as a scaling law in [Kaplan]. We fit this power law to these efficient solutions, denoting the fine-tuned Pareto frontier. 
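Fitting the $a*log(x)+b$ form is ordinary least squares after a log transform of compute; a minimal sketch with invented frontier points (not the paper's measurements):

```python
import numpy as np

# Hypothetical Pareto-frontier points: (compute budget, benchmark score).
# Values are made up for illustration only.
compute = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # e.g. in PFLOPs
score = np.array([40.0, 43.1, 45.9, 49.2, 52.0])

# Fitting a*log(x) + b is linear least squares in log(x).
a, b = np.polyfit(np.log(compute), score, deg=1)  # highest degree first
pred = a * np.log(compute) + b
rmse = float(np.sqrt(np.mean((pred - score) ** 2)))
print(f"a={a:.2f}, b={b:.2f}, RMSE={rmse:.3f}")
```

With these toy points, roughly three score points per doubling of compute, the fit recovers a slope near 3/ln(2) and a small residual, mirroring how the frontier fit and its RMSE are obtained.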
We've updated the paper to clearly define the efficient Pareto frontier (see Section 7).\\n\\n> It's very hard for me to believe that the fitted Pareto curve is indeed Pareto optimal as the points in Figure 8 and Figure 10 exceed the Pareto frontier by a large margin. \\n\\n\\nWe apologize for the confusion. To clarify, the break-even analyses in Figures 8 and 10 are performed precisely to examine **how many tasks the gradient-based data selection methods must target to reduce selection costs enough to surpass the current compute-optimal Pareto Frontier**.\\n\\n> the fitted exponential curves surpass the fitted Pareto curve considerably in Figure 3\\n\\nThe fitted exponential curves are parametric models fitted to the performance data of individual data selection methods, while the Pareto frontier in our analysis is empirically derived. For clarity, we have revised the plot and removed the empirical Pareto front from our parametric fit (see Figure 3).\\n\\n*References:*\\n\\n[Kaplan] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" *arXiv preprint arXiv:2001.08361* (2020).\\n\\n[Schaeffer] Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. \\\"Are emergent abilities of large language models a mirage?.\\\" *Advances in Neural Information Processing Systems* 36 (2024).\\n\\n[Muennighoff] Muennighoff, Niklas, et al. \\\"Scaling data-constrained language models.\\\" *Advances in Neural Information Processing Systems* 36 (2023): 50358-50376.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer 9BiK (3/4): Miscellaneous Corrections\", \"comment\": \"> MINOR BUT IMPORTANT QUIBBLES: Authors state too unequivocally: \\u201cin practice, the total compute budget is predetermined: the number of accelerators and their usage hours are allocated in advanced\\u201d. That certainly is NOT true in many large companies that are actively training and leading with LLMs. 
So please hedge and preface that sentence with \\u201cIn many cases,\\u201d.\\n\\nAgree. Fixed (see line 25).\\n\\n> This sentence didn\\u2019t make sense to me: \\u201cFor example work on parameter efficient fine-tuning, targets improving the memory-usage of this stage (Hu et al., 2021).\\u201d\\n\\nRephrased to \\\"For example, parameter-efficient finetuning methods like LoRA (Hu et al., 2021) aim to reduce memory usage during fine-tuning by updating only a small subset of the model's parameters.\\\" (see line 34-36).\\n\\n> TYPO \\u201ccreate an minimal\\u201d -> \\u201ca minimal\\u201d\\n\\nFixed (see line 38)\\n\\n> Citing Hart 1968 paper \\\"Condensed Nearest Neighbors\\\"\\n\\nThis is a great suggestion. Added the reference (see line 39).\\n\\n> TYPO \\u201cData selection takes the full training data as input and chooses a subset to the train\\u201d -> \\u201cto train\\u201d\\n\\nFixed (see line 84-85). \\n\\n> This sentence doesn\\u2019t quite parse, please re-write \\u201cInstruction-tuned models can handle a variety of possible inputs for downstream use cases as either classification or generative model\\\"\\n\\nRephrased to \\\"Instruction-tuned models can handle a variety of possible inputs and can be applied to downstream tasks requiring either classification or open generation\\\" (see line 100-101).\\n\\n> \\u201cAssuming we at minimal\\u201d -> \\u201cat minimum\\u201d\\n\\nFixed (see line 142-143).\\n\\n> In 4.1 you refer to Section 4.1, which is a bit weird and really you can just delete that whole sentence.\\n\\nGood catch! Deleted. \\n\\n> Do figure out how to bold the last column title C_forward in Table 1. It can be done (probably \\\\mathbf{}).\\n\\nFixed (see line 162-163).\\n\\n> TYPO: Fit of Compute-Performace Relationship -> Performance\\n\\nFixed (see line 414).\"}", "{\"summary\": \"This paper considers selecting data for finetuning LLMs under a computational budget. 
The computational cost is divided into two parts: 1) when using the validation set to evaluate the performance, the validation set will incur an initial cost; 2) training on each sample will cost a fixed amount of computation. The authors propose an exponential distribution model to fit the model performance v.s. the training costs for four types of data selection methods: lexicon-based, embedding-based, perplexity-based, and gradient-based. The paper consists of numerical experiments over several models and several tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper considers an interesting problem, data selection under computational constraints, and has interesting observations that the initial cost cannot be neglected when considering the computational budget.\", \"weaknesses\": \"1. (Major) Lack of novelty: although this paper proposes a framework for analyzing the computational cost of each data selection method, it does not provide any new techniques based on this framework. Furthermore, the key observation is not very surprising: the computational cost contains an initial cost when evaluating the validation set, thus the perplexity-based or the gradient-based is clearly not optimal under a limited compute budget.\\n2. (Major) Lack of soundness: a) the parametric model is selected to be an exponential distribution and the model is fitted to minimize the squared error, but the choice is never justified by any theoretical analysis or numerical results. The fitted curve is also not very convincing (e.g. Figure 3 and Figure 7). b) The Pareto frontier is never formally defined in this paper nor sufficiently discussed. It's very hard for me to believe that the fitted Pareto curve is indeed Pareto optimal as the points in Figure 8 and Figure 10 exceed the Pareto frontier by a large margin. 
Also, the fitted exponential curves surpass the fitted Pareto curve considerably in Figure 3, implying that the two curves even contradict each other.\\n3. (Moderate) Lack of insights: this paper is more of an experiment report than a well-motivated paper. The motivation for studying such a computation-constrained data selection problem is not fully supported. The authors just launch a bunch of models, adopt several tasks, and collect all the results without providing sufficient analyses.\\n4. (Moderate) The style file seems to not follow ICLR 2025 templates: the lines are not numbered.\\n5. (Minor) Typo(s):\\n Figure 1 \\\"using a much larger base model then data selection model\\\": \\\"then\\\" should be \\\"than\\\".\", \"questions\": \"See the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mMXd (1/4): Novelty\", \"comment\": \"Thank you for the constructive feedback and careful reviews. We provide point-to-point responses to your concerns regarding novelty, soundness, and insights.\\n\\n> (Major) Lack of novelty: although this paper proposes a framework for analyzing the computational cost of each data selection method, it does not provide any new techniques based on this framework. 
Furthermore, the key observation is not very surprising: the computational cost contains an initial cost when evaluating the validation set, thus the perplexity-based or the gradient-based is clearly not optimal under a limited compute budget.\", \"we_believe_the_key_observation_is_more_nuanced\": \"- While each data selection method incurs an initial cost when evaluating the validation set, **it does not a priori imply that perplexity-based and gradient-based methods are inherently not compute-optimal.** In fact, these methods were developed to enhance training efficiency [LESS; PPL1; PPL2] and are often assumed to provide better performance per unit of compute due to their sophisticated use of model information.\\n- To investigate compute-optimal fine-tuning, we formalize this into compute-constrained data selection, showing that compute-optimal data selection is determined by two factors: the cost of the data selection method ($C$) and the rate at which it extracts information from the training dataset ($\\\\lambda$). **Thus, there is no single compute-optimal data selection across all compute budgets; different methods are optimal at different compute budgets.**\\n- Our empirical findings reveal that **powerful data selection methods** (perplexity-based and gradient-based) **are less compute-optimal for most practical compute budgets.** Simpler methods like lexicon-based (BM25) and embedding-based (Embed) approaches often outperform them in terms of compute efficiency.\\n\\nAs far as we are aware, this paper is the first to study the optimal scaling properties of data selection for LLM training and show that advanced data selection methods may not be as compute-optimal as they seem.\\n\\n*References:*\\n\\n[LESS]: Xia, Mengzhou, et al. \\\"Less: Selecting influential data for targeted instruction tuning.\\\" arXiv preprint arXiv:2402.04333 (2024).\\n\\n[PPL1]: Antonello et al. (2020). Selecting Informative Contexts Improves Language Model Finetuning. 
arXiv:2005.00175\\n\\n[PPL2]: Ankner et al. (2024). Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models. arXiv:2405.20541\"}", "{\"summary\": \"Surveys and experimentally compares different data selection methods for LLM fine-tuning, and reasonably and quantitatively concludes that only rather cheap methods that choose train samples based on some cheap similarity to the validation samples are likely to be worthwhile, but depends (of course) on how much training computation you are going to run.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"I found this is an important experimental contribution for practitioners and academics alike, and is likely to be heavily cited in the future. While there will inevitably be some discussion of whether they compared to all the right and best methods, I think that's in the details: they compared good and sufficiently recent example methods from high level strategies and showed significant enough differences that seem endemic to these different strategies.\", \"weaknesses\": \"The weaknesses I detail below should all be corrected, but they are all minor, none of them individually or in total would be a good reason to reject the paper.\", \"section_3_problems\": \"\", \"at_the_beginning_of_section_3\": \"\\u201cOur goal is to find the optimal subset S \\u2286 X\\u201d pretty sure you mean subset S \\u2286 D there?\\n\\nI think you are implying that the train set is not necessarily IID with the validation set, but that the validation set is IID with the test set. All I see you say is that the validation set is \\u201ccorrelated\\u201d with the test set, which is a really weak and vague thing to say, but if that\\u2019s all you want to say, okay, but I will be curious if in your experiments you actually make the Val and Test sets IID. \\n\\nYou need to define that $T$ represents a training procedure, you just use it without defining it now. 
\\n\\n\\u201cBy ranking the data points\\u2026.\\u201d Given a large initial train set D, having to rank the datapoints at cost O(D log D) is not free, hope you guys are taking that into account. Of course, you might argue just touching all D samples is O(D), but that is less relevant if, say, we have an infinite generator of data D (e.g. a real-time reader of the datastream formerly known as Twitter) and an independent (non-ranking) decider of whether each incoming $x$ is worth training on, that is, we shouldn\\u2019t have to assume we need to sort at cost O(D log D).\\n\\n\\nI\\u2019m uncomfortable as a reader that in (2) you are still defining your objective in terms of the test set. I agree that\\u2019s the ultimate goal, but if you actually implemented (2) it assumes knowledge of the test set. By the time you get to (2), I expected you to have switched to the validation set in the stated objective, which is different than the final metric, which should of course then be on the test set.\", \"section_4_feedback\": \"You can cut some of the intro to Sec 4, but please add in that Lexicon-based and Embedding-based are both strategies that try to select train samples that are similar to the validation samples, whereas Perplexity and Gradient solutions are optimizing for the effect on the model loss.\", \"section_5_feedback\": \"Why do you assume training on all x is equal? Is that really true (honest question)? My guess is yes due to the very bureaucratic nature of how these models are run, but that\\u2019s not always true of machine-learned models, for example, a classic decision tree is much faster to evaluate for some inputs than others (if it has leaves of varying depths). \\n\\nIn computing C(k), you sum over C_v(x), which I assume is for x \\\\in D? Please be explicit there about which x you are summing over. And I\\u2019m surprised that that cost does depend on $x$. Does C_v(x) really vary so much? 
Could that not just be \\\\|D\\\\| (= size of D) times some cost per training sample?\", \"random\": \"Really appreciate you comparing to just a random sample as a baseline.\", \"minor_but_important_quibbles\": \"\", \"authors_state_too_unequivocally\": \"\\u201cin practice, the total compute budget is\", \"predetermined\": \"the number of accelerators and their usage hours are allocated in advanced\\u201d. That certainly is NOT true in many large companies that are actively training and leading with LLMs. So please hedge and preface that sentence with \\u201cIn many cases,\\u201d.\\n\\n\\nThis sentence didn\\u2019t make sense to me:\\n\\u201cFor example work on parameter efficient fine-tuning, targets improving the\\nmemory-usage of this stage (Hu et al., 2021).\\u201d\\n\\nTYPO \\u201ccreate an minimal\\u201d -> \\u201ca minimal\\u201d\\n\\nSince you are citing John 1975, consider also citing the famous and foundational Hart 1968 paper on Condensed Nearest Neighbors, the canonical and classic approach for selecting data for training (summarized e.g. in wikipedia: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm). 
However, nearest neighbors is such a different machine learning paradigm that if you don\\u2019t feel that adds value to this work, you can ignore this suggestion, but given the statement in your paper is \\u201cData\\nselection is a foundational approach in machine learning where the objective is to create an minimal\\ndataset from a collection of data\\u201d the Hart 1968 paper is exactly the classic paper to have done just that.\\n\\nTYPO \\u201cData selection takes the full training data as input and\\nchooses a subset to the train\\u201d -> \\u201cto train\\u201d\\n\\nThis sentence doesn\\u2019t quite parse, please re-write \\u201cInstruction-tuned models can handle a variety of possible inputs for downstream use cases as\\neither classification or generative model\\u201d\", \"this_sentence_needs_some_polishing_of_singular_vs_plural\": \"\\u201cTherefore\\nwhile instruction-tuning is not the direct focus of this work, it provide a real-world applications of\\ncompute-constrained data selection.\\u201d\\n\\n\\n\\u201cAssuming we at minimal\\u201d -> \\u201cat minimum\\u201d\\n\\nEquation (2) can be written all on one line, winning you a bit of precious space. \\n\\nIn 4.1 you refer to Section 4.1, which is a bit weird and really you can just delete that whole sentence. \\n\\nDo figure out how to bold the last column title C_forward in Table 1. It can be done (probably \\\\mathbf{}).\", \"typo\": \"Fit of Compute-Performace Relationship -> Performance\", \"questions\": \"The value of the strategies that depend on similarity of samples to validation samples worries me in that they seem very dependent on the size of the validation set, and that if the validation set is too small one might overfit. 
But perhaps it doesn't matter too much since you are always selecting some large number of training samples anyways, and so even if the validation set is as small as Paris (to use a 2D example), you still correctly pick the subset of training samples in Europe and dump the ones in the Americas, and that wouldn't have changed much if the validation set was all of France instead of just Paris. It'd be great to see some discussion & experiments about this, even if they are tiny, in this paper.\\n\\nSee also the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9BiK (1/4): Section 3\", \"comment\": \"Thank you for a thorough, careful, and insightful review. Responses are divided into sections.\\n\\n> At the beginning of Section 3: \\u201cOur goal is to find the optimal subset S \\u2286 X\\u201d pretty sure you mean subset S \\u2286 D there?\\n\\nThank you for pointing that out. We have corrected this typo (see Section 3, line 112).\\n\\n> I think you are implying that the train set is not necessarily IID with the validation set, but that the validation set is IID with the test set. All I see you say is that the validation set is \\u201ccorrelated\\u201d with the test set, which is a really weak and vague thing to say, but if that\\u2019s all you want to say, okay, but I will be curious if in your experiments you actually make the Val and Test sets IID.\\n\\nYes, we meant to say that the validation set is IID with the test set, and that the train set is correlated but not necessarily IID with the validation/test set. 
We have rephrased the description in our assumption (see Section 3, line 122-123).\\n\\n> \\u201cBy ranking the data points\\u2026.\\u201d Given a large initial train set D, having to rank the datapoints at cost O(D log D) is not free, hope you guys are taking that into account. Of course, you might argue just touching all D samples is O(D), but that is less relevant if, say, we have an infinite generator of data D (e.g. a real-time reader of the datastream formerly known as Twitter) and an independent (non-ranking) decider of whether each incoming x is worth training on, that is, we shouldn\\u2019t have to assume we need to sort at cost O(D log D).\\n\\nWe agree that sorting train set D cost at minimal O(D log D) after a data selection method has scored through the train set D. However, the cost to sort is minimal in comparison with the two computational bottlenecks: (1) cost of training LLMs (2) computing the utility function on D. For context, the FLOPs needed for a single forward pass on the smallest 7B model is about 4.69E+10 FLOPs per token, and finetuning for 10% of data (~about 10 Million tokens) in our settings requires 4.69E+18 FLOPs. In practice, the cost to sort poses no difference to our analysis.\"}", "{\"title\": \"Response to Reviewer 9BiK (2/4): Section 4 & 5\", \"comment\": \"> I\\u2019m uncomfortable as a reader that in (2) you are still defining your objective in terms of the test set. I agree that\\u2019s the ultimate goal, but if you actually implemented (2) it assumes knowledge of the test set. By the time you get to (2), I expected you to have switched to the validation set in the stated objective, which is different than the final metric, which should of course than be on the test set.\\n\\nWe agree with the sentiment that the objective should be defined in terms of the validation set instead of the test set. 
We\\u2019ve revised the objective function to align with practical implementation (see Section 4, line 145\\u2013147).\\n\\n> SECTION 4 FEEDBACK: You can cut some of the intro to Sec 4, but please add-in that Lexicon-based and Embedding-based are both strategies that try to select train samples that are similar to the validation samples, whereas Perplexity and Gradient solutions are optimizing for the effect on the model loss.\\n\\nYes, this makes the categorization much cleaner. We added the explanation \\\"While lexicon and embedding-based methods aim to select training samples similar to validation samples, perplexity and gradient-based methods focus on optimizing their effect on model loss. \\\" (see line 203-205)\\n\\n> SECTION 5 FEEDBACK: Why do you assume training on all x is equal? Is that really true (honest question)? My guess is yes due to the very bureaucratic nature of how these models are run, but that\\u2019s not always true of machine-learned models, for example, a classic decision tree is much faster to evaluate for some inputs than others (if it has leaves of varying depths).\\n\\nThis is a good point, but it really is true in modern language models: they process inputs in batches and pad each sequence to a fixed length.\"}
We have since revised Section 3 (see line 113, 130).\\n\\n**Greedy Data Selection**\\n\\n> I am not sure whether I understood the greedy data selection introduced in Sections 3 and 4. I am under the impression that all data points are scored individually and afterwards they are ranked according to their scores and selected until budget K is exhausted. Isn't it necessary to do this in an interleaved manner, in order to capture effects like redundancy and the submodularity of utility? Consider the extreme case in which the most informative data point x is repeated K times in the dataset, then we would end up with a selection that contains the same datapoint K times.\\n\\nYou are correct that data selection scores all data points individually based on the utility function $v(x)$, ranks them, and selects the top K data points until the compute budget is exhausted.\", \"regarding_redundancy_and_submodularity\": \"- We assume that each data point $x$ is unique within the dataset $\\\\mathcal{D}$. In practice, the datasets used for fine-tuning LLMs are large and typically undergo preprocessing steps that remove exact duplicates [Wang]. Scenarios where the same data point is repeated $K$ times are negligible in our case.\\n- Submodularity decomposes the effects of a selected dataset $S$ into every data point $x \\\\in S$. This implies diminishing returns\\u2014adding a redundant or similar data point yields less additional utility [Kirchhoff]. This justifies scoring data points individually and selecting them greedily.\\n\\n\\n**Mid / Top PPL**\\n\\n> In Figure 2, the plot represents performance of Mid-PPL, while the plot in Figure 3 represents performance of Top-PPL. What is the reason for this discrepancy?\\n\\nApologies for our mistake. To clarify, MMLU reports performance with Mid-PPL, while BBH uses Top-PPL. We report the best performance for each task from these methods. 
This has been revised (see line 316\\u2013318).\\n\\n**Pareto Frontier**\\n> In Figure 2, what exactly is the dashed line? Shouldn't the Pareto front contain all solutions, that are dominating on the dimensions of compute (low) and accuracy (high)? The line is straight, is it something like a linear regression applied to the solutions on the Pareto front?\\n\\nYes. In our context, the Pareto front contains all solutions that dominate along the dimensions of compute (x-axis) and performance (y-axis). These solutions represent the most efficient choices, providing the best possible performance for a given compute budget under specific data selection methods and training token lengths. Furthermore, we assume that the efficient computational frontier can be described by a power-law relationship between the compute budget and number of training tokens. A power law here refers to a term of the form $a*log(x)+b$, where $a$ and $b$ are fitted parameters. This form has been extensively referenced as a form of scaling law in [Kaplan]. We fit this power law to these efficient solutions, yielding the finetuned Pareto frontier. This is represented as a line in linear-log space.\\n\\nWe have since updated the paper to clearly define this concept (see Section 7, line 320-324).\\n\\n*References:*\\n\\n[Kaplan] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" *arXiv preprint arXiv:2001.08361* (2020).\\n\\n[Wang] Wang, Yizhong, et al. \\\"How far can camels go? Exploring the state of instruction tuning on open resources.\\\" *Advances in Neural Information Processing Systems* 36 (2023): 74764-74786.\\n\\n[Kirchhoff] Kirchhoff, Katrin, and Jeff Bilmes. \\\"Submodularity for data selection in machine translation.\\\" *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*. 2014.\"}
We would like to clarify what we intended to say because we do not think we are in disagreement with your point.\\n\\nYou are correct that the data selection method proposed in Kirchhoff & Bilmes (2014) addresses redundancy, but it is also quadratic and unlikely to be tractable in the LLM domain. They note as well that most approaches to this problem do not handle redundancy, stating that most data selection methods \\u201cvalue each sentence individually, without regard to any interaction with already selected sentences. This approach, therefore, is **modular (a special case of submodular)** and values a set $X$ via $m(X)$ = $\\\\sum_{x\\\\in X}m(x)$\\u201d (Kirchhoff & Bilmes, Section 3). \\n\\nWe mention their work not because we claim to solve redundancy in this work but because their use of (sub)modularity provides a general, elegant framework for thinking about data selection. **In fact they also note that most practical data selection methods rely on (sub)modularity, even when they don't consider redundancy**: \\u201cThe fact that submodularity is implicit and unintentionally used in previous work suggests that it is natural for this problem\\u201d (Kirchhoff & Bilmes).\\n\\nWe agree with you that the modular assumption is not ideal, but it is also the assumption made by all the previous methods benchmarked in this paper. Importantly, this is not an assumption made exclusively in our paper but is common in data selection methods for LLMs due to computational constraints. Given the complexity of these algorithms, we do not attempt to handle the general case, but instead focus on the modular assumption. They note \\u201cthe threshold method (greedy algorithm) solves Eqn.2 (core-set selection problem) exactly. 
On the other hand, a modular function has no chance to represent interaction or redundancy between sentences\\u201d (Kirchhoff & Bilmes).\\n\\nThe reviewer acknowledges that our paper presents a unified framework that analyzes existing data selection approaches with a holistic focus on compute\\u2014this is the core contribution of our work. While redundancy is a valid concern, addressing it is not our primary contribution due to practical constraints. We agree, though, that methods that account for redundancy are an important direction for future work, and we mentioned (Kirchhoff & Bilmes) primarily because we also think handling this efficiently is a critical area of research.\\n\\nWe will add a footnote and discussion urging future work to consider redundancy and submodular formulations.\"}
4dtwyV7XyW
Toward Principled Transformers for Knowledge Tracing
[ "Kai Neubauer", "Yannick Rudolph", "Ulf Brefeld" ]
Knowledge tracing aims to reason about changes in students' knowledge and to predict students' performance in educational learning settings. We propose knowledge tracing set transformers (KTSTs), a straightforward model class for knowledge tracing prediction tasks. This model class is conceptually simpler than previous state-of-the-art approaches, which are overly complex due to domain-inspired components, and which are in part based on suboptimal design choices and flawed evaluation. In contrast, for KTSTs we propose principled set representations of student interactions and a simplified variant of learnable modification of attention matrices for positional information in a student's learning history. While being largely domain-agnostic, the proposed model class thus accounts for characteristic traits of knowledge tracing tasks. In extensive empirical experiments on standardized benchmark datasets, KTSTs establish new state-of-the-art performance.
[ "educational data mining", "knowledge tracing", "transformer" ]
https://openreview.net/pdf?id=4dtwyV7XyW
https://openreview.net/forum?id=4dtwyV7XyW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "m1eL1GbKmA", "krhh8sbQ0f", "kIpEQsris1", "evpMKEie2K", "ZyYzZPIeBg", "HqniUVLMnP", "0wqMPbTD6s" ], "note_type": [ "comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732612879988, 1732606376463, 1730652834252, 1732097378230, 1730726601475, 1730410445377, 1730212297052 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6646/Authors" ], [ "ICLR.cc/2025/Conference/Submission6646/Reviewer_aeYy" ], [ "ICLR.cc/2025/Conference/Submission6646/Reviewer_aeYy" ], [ "ICLR.cc/2025/Conference/Submission6646/Authors" ], [ "ICLR.cc/2025/Conference/Submission6646/Reviewer_b2fc" ], [ "ICLR.cc/2025/Conference/Submission6646/Reviewer_eo5z" ], [ "ICLR.cc/2025/Conference/Submission6646/Reviewer_gSxz" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear PC,\\n\\nWe are withdrawing the paper.\\nWe thank all reviewers for their time and for their comments.\\n\\nSincerely yours,\\nthe authors\"}", "{\"title\": \"Lowering the score\", \"comment\": \"Unfortunately the authors did not engage in any discussion regarding the reviews (neither mine nor others'). Therefore I would decrease my score from Reject to Strong Reject.\"}", "{\"summary\": \"The paper tackles the task of knowledge tracing, which can sort of be summarized as predicting a binary correctness value for a response R to a question Q with certain knowledge components C (i.e. concepts), conditioned on the previous questions (and their components and responses). The authors use the transformer architecture for this (which seems to also have been explored in prior work) and propose a specific Multi-head Self-Attention block that is suited to the task. Specifically, it is argued that the Knowledge Components should be parsed in a permutation-agnostic way. 
Other approaches, such as simply taking the mean of all the components, are also studied.\\n\\nThe experiments quantitatively study such design choices and also explore other aspects of the data, such as the average number of components per question and how it relates to the used method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"**(a)** The related work discussion is very thorough and, even though I'm not familiar with the literature, the authors do a great job in covering many relevant works on every single design choice. The presentation of such related work, however, needs some rework (see weakness **(a)** below)\\n\\n**(b)** The experiments seem to be very thorough (it is mentioned that there are multiple initializations and hyper-parameters tested for every baseline). The ablation is also well-done in terms of different design choices (cf. Table 3. with inputting different queries to the attention blocks or Table 2&3 (bottom) with different component aggregation strategies).\\n\\n**(c)** The numbers overall also seem promising. Although I'm confused whether the (mean) and (unique) lines from Tables 2 and 3 can be considered their contribution? (see Weakness **(d)** below).\", \"weaknesses\": \"**(a)** I really appreciate the authors being thorough when discussing the related work, but I think there is a lot of room for improving the discussion. Specifically, the current version can sometimes be very confusing to someone who is not entirely familiar with the literature.\\n\\nFor example, take a look at Lines 108-136 (\\u2018Limitations of related work\\u2019). I don\\u2019t understand why this section has to come after the Problem Setting (Section 3) and why it is not already covered in the previous Related Work discussion (Section 2). 
I understand that the authors needed to rely on the defined notation (in Line 121), but most of the text could be merged into the previous Related Work discussion. You could also potentially move the problem setting to precede Section 2.\\n\\nThe related work discussion is again further spread across other sections; see Lines 198-207 and Lines 252-259. This really makes it hard for me to assess where this paper stands in comparison to related work, as suddenly later it is revealed that there are other works that have also explored similar directions, which didn\\u2019t seem to be the case in Section 2. I urge the authors to significantly rework the related work discussion. For example, you could include a paragraph on all the transformer-based Knowledge Tracing methods and maybe also a paragraph on the works that explored different attention mechanisms. These should all be in Section 2 (Related Work) and not interleaved in the method discussion!\", \"one_other_example_of_this_issue_would_be_line_286_288\": \"\\u201cWe propose three interaction embeddings\\u201d. And later, in Lines 307-308, two of these (`mean` and `unique`) turn out to be used in prior work, and only the third one is claimed novel. Again, as a reader, it is very confusing where this paper stands.\\n\\n**(b)** The overall related work discussion can also sometimes be too abstract. For example, Lines 127-128 state \\u201cWithout proper masking \\u2026, this introduces label leakage\\u201d. This sentence, for example, does not make clear what is meant by either \\u201cproper masking\\u201d or \\u201clabel leakage\\u201d. \\n\\n\\n**(c)** There are certain instances where the authors criticize prior work on weak grounds. For example, in Lines 200-202 it is stated that prior RNN-based work has certain inductive biases by having a hidden state associated with a student's knowledge, and later it is stated that the paper\\u2019s [transformer-based] approach is conceptually simpler. 
I cannot 100% agree with such a comparison. I don\\u2019t see why having an inductive bias, as long as it\\u2019s general enough, would be a downside, and certainly don\\u2019t agree with transformers being \\u201cconceptually simpler\\u201d than an RNN. \\n\\n**(d)** Regarding the results, it seems that the most-relevant line in the tables is `KTST (MHSA)` as it is stated in line 309-310 that the `(mean)` and `(unique)` methods are from prior work. If that is the case, then the tables actually don't seem that promising since it's always under-performing prior work? I would like to ask the authors to clarify this.\\n\\n\\n----- Minor Issues -----\\n\\n**(e)** The very first paragraph in the introduction needs 1-2 more sentences to further clarify what the setting is. It should clarify that the context is student-computer interaction and this is supposed to be used for digital education.\\n\\n**(f)** Line 122-123, at the end, it is stated: \\u201cOne consequence is an increase \\u2026\\u201d. But later in the text there is no second consequence. \\n\\n**(g)** Currently Figure 2 is not optimal, as it is basically the vanilla transformer architecture, except for X, Y, and Rs as inputs and outputs. I think the figure should include further detail, demonstrating all the token types (e.g. knowledge components and questions). A significant part of your method is also the aggregation, which, again, the figure does not explain. The \\u201ccausal masking\\u201d should ideally also be demonstrated in the figure.\\n\\n**(h)** I think Figure 1 is really not that informative. It just shows random numbers and shapes and it really doesn't describe the problem. 
I think something like the figures in this talk (https://youtu.be/8ITtYnhslvE?si=ExSW6WGShqNTTTiu&t=106) would be more informative to someone not familiar with this topic.\", \"questions\": \"**(a)** Regarding the expanded representation works, I wanted to ask the authors how it works for a query with more than one knowledge component. Specifically, let\\u2019s say for a query X_5, after passing all the interactions Y_1 to Y_4, how do we query X_5 if it has more than one knowledge component? If for example the query has 5 knowledge components, do we have to give 4 queries (with each knowledge component in a single query) and simply ignore the output until the last (5th) query is given?\\n\\n**(b)** Regarding the results, Lines 515-520 state that MHSA works better for larger data and larger Component-to-question ratio. But at the same time MHSA (compared to the mean method) introduces learnable parameters that require data. So I\\u2019m not 100% sure if the issue is really the ratio, or just the low amount of data in certain datasets? I would appreciate it if this could be disentangled (with an experiment).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear PC,\\n\\nThe main arguments against the paper seem to deal with the perceived lack of technical novelty. By contrast, our paper does not primarily aim to introduce novel technical contributions. Instead, the paper aims to set things straight in an important interdisciplinary domain. We show that many of the previously introduced concepts used by state-of-the-art approaches are unnecessary for performance, lead to overly complex models, introduce distribution shift between training and testing, and have previously led to mistakes in empirical evaluation. 
Our contribution addresses the KT prediction problem with on-par or better performance but in a technically straightforward, correct, and simple way.\\n\\nSincerely yours, \\nthe authors\"}", "{\"summary\": \"This paper presents knowledge tracing set transformers (KTSTs), a class of streamlined models designed specifically for knowledge tracing prediction tasks. To account for the unique characteristics of these tasks, this work introduces a simplified, learnable variant of the attention matrix and an interaction representation that does not rely on domain-specific knowledge. In experiments on standardized benchmark datasets, KTST achieves new state-of-the-art performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. KTST demonstrates promising performance.\\n2. The authors conducted comprehensive comparative experiments and provided an in-depth analysis of the results.\\n3. The model diagram is clear and straightforward, enhancing readability and understanding.\", \"weaknesses\": \"1. The primary motivation is not clearly presented in the text, and the structure of sections reveals some issues in logical flow, with parts containing redundant explanations.\\n2. The expression \\u201cThis model class \\u2026 flawed evaluation\\u201d in the abstract is convoluted and unclear, making it difficult for readers to grasp the core motivation of the study.\\n3. The Introduction lacks an illustrative figure that directly presents the research problem, and the content in this section appears insufficient.\\n4. While I understand that the authors place part of the Related Work in Section 4 to emphasize their contributions, the extensive descriptions might raise questions regarding the sufficiency of the work\\u2019s original contributions.\\n5. Section 4.2 primarily introduces the learnable modification of attention matrices. Could you please explain how it differs from ALiBi [1]?\\n6. 
Section 4.3 mainly addresses the handling of multi-concept questions and identifies it as one of the paper\\u2019s research questions. As far as I know, related works [2,3,4] have largely resolved this issue, so what are the significant advantages of this approach?\\n7. The experiments in Section 5.3 introduce a randomly simulated dataset for multi-concept knowledge, aimed at comparing three embedding methods. In addition to random simulations, including a specially designed simulation method could make the comparisons more compelling.\\n\\n[1] Press O, Smith N A, Lewis M. Train short, test long: Attention with linear biases enables input length extrapolation[J]. arXiv preprint arXiv:2108.12409, 2021.\\n\\n[2] Long T, Liu Y, Shen J, et al. Tracing knowledge state with individual cognition and acquisition estimation[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021: 173-182.\\n\\n[3] Zhang M, Zhu X, Zhang C, et al. Multi-factors aware dual-attentional knowledge tracing[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2021: 2588-2597.\\n\\n[4] Cui J, Chen Z, Zhou A, et al. Fine-grained interaction modeling with multi-relational transformer for knowledge tracing[J]. ACM Transactions on Information Systems, 2023, 41(4): 1-26.\", \"questions\": \"Refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes Knowledge Tracing Set Transformers (KTSTs) for predicting student performance. Unlike domain-specific models, KTSTs use a Transformer backbone and a learnable attention mechanism to handle student interaction data. KTSTs also learn set representations for knowledge components. 
The model outperforms or matches state-of-the-art results on multiple educational benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is clearly presented, effectively clarifying both the motivation and model architecture. The Transformer-based model is also straightforward to understand and implement. Additionally, the evaluations are quite comprehensive, comparing performance across 22 benchmark models, which clearly demonstrates the advantages of KTST in predicting student performance.\", \"weaknesses\": [\"This work contrasts itself with models incorporating domain-specific knowledge (i.e., models with more interpretable components). However, the literature review overlooks a lot of newly published interpretable models. The focus of the review should shift towards domain-inspired and other Transformer-based models rather than general deep learning models. The authors should expand the review to include recent interpretable models and provide a comparative analysis.\", \"In Sections 2 and 4, the repeated claim that \\u201cIn contrast to KTSTs, most related work includes domain-inspired components that increase model complexity\\u201d lacks a clear comparison of these components and a quantification of the added complexity. Thus, I have the following questions and points of confusion:\", \"1. I am unclear on the criticism of existing domain-inspired models\", \"The inductive biases in architectures discussed in the paper, such as memory-augmented neural networks, question difficulty models, and graph neural networks for knowledge structures, are well-motivated within the educational domain. They reflect human learning processes, are generally beneficial in educational contexts, and are not so specific as to be limited to particular subjects or cultures. 
From my perspective, the contribution of this work\\u2014specifically, the multi-head attention mechanism with learnable exponential decay on attention weights\\u2014is a re-formulation of embedding memory priors into the model. Similarly, the permutation invariance of concept representations functions as another form of regularization on the concept graph structure.\", \"Regarding \\u201cinteraction representations proposed in related work are often domain inspired and unnecessarily complex. \\u201d Could the authors provide concrete examples of the domain-inspired and unnecessarily complex embeddings in existing works?\", \"2. For the compared benchmarks, could the authors compare the complexity of these models quantitatively, including the training and evaluation time and the number of model parameters?\", \"3. Domain-inspired models are generally motivated by their effectiveness with small datasets and their interpretability, rather than purely optimizing prediction performance. To evaluate this, I suggest two additional experiments:\", \"Could the authors conduct experiments on smaller datasets? Currently, models are trained on sequences of up to 200 consecutive interactions, which is extensive for educational data. Reducing the sequence length and the number of students would provide insight into model performance on limited data.\", \"Could the authors analyze the embeddings learned by the model, such as the representations for knowledge components and question embeddings? This would provide interpretability insights into the regularization of permutation invariance.\", \"I do appreciate the insight that concept representations should be permutation invariant. Could the authors include an ablation study to examine the impact of this design choice? 
Specifically: 1) test the model without enforcing permutation invariance on concept representations; 2) remove the knowledge component embeddings altogether and only keep the question embeddings.\", \"I find it challenging to pinpoint the technical contributions of this work. The learnable attention weights component (Section 4.2) seems primarily to add flexibility to existing domain-inspired representations. Additionally, although three choices for set representations are explored, the novel approach (MHSA) offers only marginal advantages over mean embedding when training sequences exceed 4,000 interactions and each question includes 6 knowledge components in synthetic data. This setting does not seem very applicable to real-world datasets, as shown in the experiments on KT data.\"], \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Knowledge Tracing Set Transformers (KTSTs), a simpler, principled model class for knowledge tracing tasks that avoids complex, domain-specific designs by employing set representations of student interactions and a learnable attention modification for positional information.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"This paper focuses on knowledge tracing, which is an important and interesting topic in the educational community.\", \"The writing is clear and the structure easy to follow.\", \"The authors compare a substantial number of baselines, which enhances the credibility of the experimental results.\"], \"weaknesses\": [\"The novelty of this paper may not fully meet the conference\\u2019s expectations. It appears to incorporate established methods to enhance model performance without sufficiently clarifying the research problem and motivation. 
The paper claims to improve ALiBi by making its matrix parameter learnable to provide positional information. However, a similar approach with a learnable ALiBi matrix parameter was already proposed in [1]. How does this paper\\u2019s method differ from that approach?\", \"Although the authors propose ALiBi with learnable parameters to supply position information to attention and introduce aggregation functions, the ablation study focuses solely on ALiBi, which appears insufficiently comprehensive. It remains unclear to what extent each component affects model performance.\", \"Figure 3 shows little difference among the various aggregation functions. It's unclear why simulated data is used instead of real data.\", \"The paper claims the proposed method is \\u201csimpler than previous state-of-the-art approaches,\\u201d yet does not provide relevant experiments, such as an analysis of parameter count or computational cost.\", \"Although significance testing is conducted, the performance improvement observed is modest. From a practical standpoint, it remains uncertain whether this advancement is substantial enough to significantly impact the field of knowledge tracing.\", \"Figure 2 lacks clarity and omits essential annotations. For instance, what are the specific inputs for Q, K, and V? What does the pink box in the lower left corner represent?\", \"There is no analysis of the learnable attention matrices to investigate what exactly influences model performance. I recommend adding experiments to enhance understanding.\", \"---\", \"[1] Chi, Ta-Chung, et al. 
\\\"Kerple: Kernelized relative positional embedding for length extrapolation.\\\" NeurIPS, 2022.\"], \"questions\": [\"What are the differences between the improved ALiBi in this paper and the method in [1]?\", \"What is the motivation for using ALiBi in knowledge tracing models?\", \"What are the main contributions this paper?\", \"Why is simulated data used instead of real data in Section 5.3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4dhTYe5pjD
Low Variance: A Bottleneck in Diffusion-Based Graph Imputation
[ "Daeho Um", "Sunoh Kim", "Jiwoong Park", "Jongin Lim", "Seong Jin Ahn", "Seulki Park" ]
In this paper, we tackle learning tasks on graphs with missing features, improving the applicability of graph neural networks to real-world graph-structured data. Existing imputation methods based upon graph diffusion produce channels that have nearly identical values within each channel, and these low-variance channels contribute very little to performance in graph learning tasks. To prevent diffusion-based imputation from producing low-variance channels, we introduce synthetic features that address the cause of the production, thereby increasing variance in low-variance channels. Since the synthetic features prevent diffusion-based imputation models from generating meaningless feature values shared across all nodes, our synthetic feature propagation design prevents significant performance degradation, even under extreme missing rates. Extensive experiments demonstrate the effectiveness of our scheme across various graph learning tasks with missing features, ranging from low to extremely high missing rates. Moreover, we provide empirical evidence and theoretical proof that validate the low-variance problem.
[ "diffusion-based imputation", "missing features", "graph neural networks" ]
https://openreview.net/pdf?id=4dhTYe5pjD
https://openreview.net/forum?id=4dhTYe5pjD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0z97dYs8Y", "f5jjAPqOZQ", "Xorj2GGDHw", "EDeTjJ0CZR", "53X2GOraOc" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730708770470, 1730857680557, 1730713525233, 1730746153699, 1731629995871 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8202/Reviewer_Svxx" ], [ "ICLR.cc/2025/Conference/Submission8202/Reviewer_FV8o" ], [ "ICLR.cc/2025/Conference/Submission8202/Reviewer_A4gM" ], [ "ICLR.cc/2025/Conference/Submission8202/Reviewer_bu2p" ], [ "ICLR.cc/2025/Conference/Submission8202/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors found that diffusion methods used on graphs to impute missing data reinforce a low-variance problem in feature channels, hindering model performance. They proposed to inject random noise on such channels and re-diffuse on synthetic labels.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The authors identified a low variance issue exacerbated during diffusion to fill in the missing values. Clearly such channels do not provide much information in many downstream tasks. They proposed to inject random noise and re-diffuse using a very similar method to PCFI, with a hyperparameter to allow the synthetic data to have a wider range of influence. The paper is easy to follow.\", \"weaknesses\": \"My concerns are mostly on the novelty of the paper.\\n\\nThe paper method description, analysis, and experimentation follow PCFI. The authors pointed out that low variance channels are causing issues in downstream task learning. Injecting random noise and then letting the feature diffuse from the \\u201cnoisy nodes\\u201d does increase the variance, helps distinguish the nodes, and allows some structural information to be encoded in the process. But the process does not seem to be much different from PCFI; the methods described here feel more like implementation variations/details of PCFI. 
\\n\\nAdditionally, there is published work addressing such issues, as the authors reviewed in Section 2.2 (positional encoding, etc.).\", \"questions\": \"The authors argued in D3 that their method is better than positional encoding: Table 20 shows that under a 99.5% missing rate, FISF outperforms positional encoding (node2vec). Can we get some sensitivity analysis on other missing rates and different positional encoding techniques? Also, some of the experiment details are lacking.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the limitations of diffusion-based graph imputation methods, particularly focusing on the issue of \\u201clow variance\\u201d channels. These channels contain imputed features with nearly identical values across nodes, leading to limited information for graph learning tasks. To address this problem, the authors propose a novel imputation method called FISF (Feature Imputation with Synthetic Features). FISF injects synthetic features into low-variance channels, increasing their variance and enhancing the distinctiveness of node representations. The paper presents empirical evidence and theoretical proofs demonstrating the effectiveness of FISF in various graph learning tasks, including semi-supervised node classification and link prediction, with varying missing rates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and well-structured, with clear explanations and figures that facilitate understanding.\\n2. The paper presents a novel approach to address the low-variance problem in diffusion-based graph imputation, which has not been explored extensively in previous work.\\n3. The paper provides strong empirical evidence and theoretical proofs to support its claims, making the contribution robust and reliable.\\n4. 
The proposed method, FISF, demonstrates superior performance in various graph learning tasks, making it a valuable tool for researchers and practitioners working with graphs containing missing features.\", \"weaknesses\": \"1. The complexity of the proposed method is confusing. For example, why the complexity contains an O(|$\\mathcal{E}$|) term needs more clarification.\\n2. While the paper compares FISF with several existing methods, a more in-depth analysis and comparison with alternative methods, particularly in terms of scalability and computational efficiency, would strengthen the contribution. For example, the authors could give running-time comparisons on large graphs such as OGBN-Arxiv.\\n3. The performance discussion on heterophilic graphs is missing, while the competitor FP gives the analysis that diffusion-based methods are not suitable for heterophilic graphs. The authors should clarify whether such a limitation still exists in FISF.\", \"questions\": \"1. In the analysis, why do the authors say that the complexity of Dijkstra's algorithm is O(n^2)? In fact, its complexity is O(n log n) with a heap. Please clarify if a specific implementation of Dijkstra's algorithm is used that results in O(n^2) complexity, or if this is an error that needs correction?\\n2. How does the performance of FISF compare to other imputation methods in terms of scalability and computational efficiency?\\n3. Does the experimental dataset take the largest connected component? How does the method perform on non-fully connected datasets?\\n4. How does FISF perform on heterophilic graphs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors address learning tasks on graphs with missing features. Traditional graph diffusion-based imputation methods often yield low-variance channels with nearly identical values, which contribute minimally to graph learning performance. 
To counteract this, they introduce synthetic features that reduce low-variance production in diffusion-based imputation, thereby enhancing feature diversity. They provide both empirical and theoretical validation of the low-variance issue.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method demonstrates promising results across multiple datasets, including a real-world application, highlighting its practical effectiveness and versatility.\\n2. The method is supported by theoretical analysis, which strengthens the validity of the approach.\\n3. The application of diffusion-based techniques on graphs is intriguing.\", \"weaknesses\": \"1. The motivation for this study requires further clarification, particularly in establishing a clear connection between missing features and their impact on graph learning performance. The logical link between the presence of missing features and the degradation in model performance is not thoroughly articulated.\\n\\n2. The problem setting requires further clarification. The term \\u201cmissing features\\u201d is too broad, as it could refer to missing graph structure or node features, each posing distinct challenges. It\\u2019s important to specify the type of missing data being addressed and to clearly illustrate the characteristics and implications of different types of feature missingness. 
A more precise explanation would help readers understand the unique challenges of the specific missing-feature scenario considered in this paper and how it influences the choice of methods.\\n\\n3. The scalability of the proposed method is not thoroughly discussed, particularly concerning large-scale graphs or graphs with extremely high missing feature rates.\", \"questions\": \"same as in weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel methodology for solving graph machine learning tasks with missing input node features. The main idea consists of three steps: 1) pre-diffusion; 2) identifying low-variance feature channels for synthetic feature injection; 3) post-injection diffusion. Experimental results empirically demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper investigates an under-explored but practically important setting in graph machine learning.\\n\\n2) The technical framework is intuitive and simple to implement. It is also presented clearly with the necessary detail. \\n\\n3) Experimental results and analyses in Appendix are comprehensive in providing evidence that the proposed method works well in practice.\", \"weaknesses\": \"1) The proposed method, while intuitive, lacks sufficient theoretical justification. It is not entirely clear why injecting random features and re-running diffusion would help, apart from mechanistically forcing features not to converge to uniform values. It would be good if the authors could provide more theoretical investigation of the proposed method, perhaps from the viewpoint of expressivity or spectral analysis. 
At the moment, the theoretical contribution appears limited in my view.\\n\\n2) Some modelling choices seem ad-hoc and need more justification and validation (see Questions below).\\n\\n3) Empirical experiments seem to only focus on datasets with reasonably high homophily in node labels. This somewhat limits the understanding of the effectiveness of the proposed method. It would be good to see the method tested against baselines in low homophily settings.\", \"questions\": \"1) In my view, the notion of low-variance channels needs to be discussed in more depth. In one sense, low variance is not necessarily an issue, as it depends on the nature of the task - for node classification, if there is high homophily in labels, low variance is not necessarily bad. In other words, how low is \\u201clow variance\\u201d should perhaps be explained more clearly.\\n\\n2) It looks to me that the pre-diffusion step is critical in determining which channels have low variance. Is it always best to allow the diffusion to (nearly) converge, or should we control this in a more adaptive fashion? \\n\\n3) Why choose only one node to inject the synthetic feature? Can it be selected in a more informative way than randomly? Also, what\\u2019s the impact of r (number of channels to inject synthetic features)?\\n\\n4) Can the proposed method be tested on datasets with low label homophily? In my view this is when the proposed method might show the clearest advantages over baselines such as FP.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4dHyH42ha7
4DEditPro: Progressively Editing 4D Scenes from Monocular Videos with Text Prompts
[ "Jingyi Pan", "Qiong Luo" ]
Editing 4D scenes using text prompts is a novel task made possible by advances in text-to-image diffusion models and differentiable scene representations. However, conventional approaches typically use multi-view images or videos with camera poses as input, which causes inconsistencies when editing monocular videos due to the reliance of these tools on iteratively per-image editing and the absence of multi-view supervision. Furthermore, these techniques usually require external Structure-from-Motion (SfM) libraries for camera pose estimation, which can be impractical for casual monocular videos. To tackle these hurdles, we present 4DEditPro, a novel framework that enables consistent 4D scene editing on casual monocular videos with text prompts. In our 4DEditPro, the Temporally Propagated Editing (TPE) module guides the diffusion model to ensure temporal coherence across all input frames in scene editing. Furthermore, the Spatially Propagated Editing (SPE) module in 4DEditPro introduces auxiliary novel views near the camera trajectory to enhance the spatial consistency of edited scenes. 4DEditPro employs a pose-free 4D Gaussian Splatting (4DGS) approach for reconstructing dynamic scenes from monocular videos, which progressively recovers relative camera poses, reconstructs the scene, and facilitates scene editing. We have conducted extensive experiments to demonstrate the effectiveness of our approach, including both quantitative measures and user studies.
[ "4D scene editing", "Diffusion model", "4D Gaussian representation" ]
https://openreview.net/pdf?id=4dHyH42ha7
https://openreview.net/forum?id=4dHyH42ha7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kplmQodbil", "a5RfXAEFZe", "Mr9qobymT8", "8C8xxOFmdA", "6Tu9dEmDeN" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730361310654, 1730701618387, 1730708350085, 1731464154514, 1730798981467 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1752/Reviewer_jobR" ], [ "ICLR.cc/2025/Conference/Submission1752/Reviewer_GEno" ], [ "ICLR.cc/2025/Conference/Submission1752/Reviewer_zW35" ], [ "ICLR.cc/2025/Conference/Submission1752/Authors" ], [ "ICLR.cc/2025/Conference/Submission1752/Reviewer_63VK" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a framework for 4D scene editing in casual monocular videos using text prompts. Unlike traditional methods that require multi-view images or camera poses, 4DEditPro works without external tools by using two key modules: TPE for maintaining coherence across frames and SPE for enhancing spatial consistency. A pose-free 4D Gaussian Splatting (4DGS) approach further enables scene reconstruction and editing without pre-calculated poses. Experiments demonstrate the effectiveness of 4DEditPro through both qualitative and quantitative results, as well as user evaluations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is in general well organized and easy to follow.\\n2. The paper presents a pipeline for performing 4D editing from monocular video using Gaussian Splatting.\", \"weaknesses\": \"1. The key contribution of the proposed method (TPE, SPE) appears to lie in the integration of several minor techniques, such as feature extraction and injection into the video diffusion model. The novelty seems lacking, without much novel insight.\\n\\n2. The advantages of TPE are not clear. Comparative experiments with previous off-the-shelf video editing models (e.g., TokenFlow[1], Fatezero[2], Flatten[3]) would be essential to demonstrate TPE\\u2019s advantages. 
However, this paper only includes comparisons with 3D editing models like GSEditor-4D.\\n\\n3. The pose changes in the novel view synthesis from the monocular video in the demo video appear too subtle. These results seem closer to video editing and fall somewhat short of being considered 4D editing. A detailed disclosure of how the authors set the poses in this experiment with monocular video would enhance the understanding of this paper\\u2019s strengths.\\n\\n4. Additionally, while the paper claims to achieve 4D editing, no experiments exist on 4D datasets. Using representative 4D datasets like DyNeRF[4] and HyperNeRF[5], as well as comparisons with other 4D editing models (e.g., Instruct 4D-to-4D[6]), would make the paper\\u2019s argument more persuasive.\\n\\n[1] Geyer, Michal, et al. \\\"Tokenflow: Consistent diffusion features for consistent video editing.\\\" arXiv preprint arXiv:2307.10373 (2023). \\n[2] Qi, Chenyang, et al. \\\"Fatezero: Fusing attentions for zero-shot text-based video editing.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. \\n[3] Cong, Yuren, et al. \\\"Flatten: optical flow-guided attention for consistent text-to-video editing.\\\" arXiv preprint arXiv:2310.05922 (2023). \\n[4] Li, Tianye, et al. \\\"Neural 3d video synthesis from multi-view video.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. \\n[5] Park, Keunhong, et al. \\\"Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields.\\\" arXiv preprint arXiv:2106.13228 (2021). \\n[6] Mou, Linzhan, et al. \\\"Instruct 4D-to-4D: Editing 4D Scenes as Pseudo-3D Scenes Using 2D Diffusion.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"1. 
The paper claims to achieve 4D scene editing from monocular video, but isn\\u2019t this simply a combination of monocular video editing and 4D reconstruction from monocular video? I\\u2019m uncertain why this qualifies as 4D editing.\\n\\n2. The explanation for setting the camera pose using Slerp is unclear. A more detailed clarification on this aspect would be helpful. Additionally, it would strengthen the paper to demonstrate how robustly the 4D reconstruction handles variations in camera poses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies instruction-guided 4D scene editing. It proposes 4DEditPro, a method that uses two modules, TPE and SPE, to ensure temporal and spatial consistency, and uses a pose-free 4DGS to reconstruct the 4D scene from each viewpoint's videos. The proposed method can perform well in different 4D editing tasks in the evaluation.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is overall written clearly, with clear formulas and descriptions.\", \"In various editing tasks, the proposed method outperforms the baselines both quantitatively and qualitatively.\", \"Ablation studies are provided to show the effectiveness of components.\"], \"weaknesses\": [\"The reasonability of the task \\\"4D scene editing using only casual videos\\\" is questionable.\", \"The 4D scene editing's input is a 4D _scene_, which is like a 3D model in Blender but dynamic, and should be able to be placed in any coordinates and rendered into any videos accordingly.\", \"The conversion between casual videos and 4D scenes is closer to 4D scene _reconstruction_ than editing. 
This should not be regarded as challenging in 4D editing.\", \"Therefore, the challenges that this paper aims to solve are far-fetched - they are brought from another task (i.e., 4D scene reconstruction) to obtain the input of the current task (i.e. 4D scene editing), but not a part of the current task with a valid input.\", \"In fact, lots of the contents of the paper just aim to do reconstruction, e.g., in Sec 3.2. This part seems quite orthogonal to the editing part.\", \"The model seems to work only on monocular video, i.e. there is only one camera in the scene.\", \"This significantly reduces the challenge of spatial 3D consistency. This might be the reason why a depth estimator (L335) can easily reconstruct the 3D structure.\", \"When there is only one monocular video, the editing task then degrades to \\\"video editing with 3D consistency requirements.\\\" Therefore, video editing methods should be compared. However, they are not.\", \"According to the demo video, all the scenes are monocular. This necessitates the comparison against video editing models.\", \"The only baseline \\\"Instruct 4D-to-4D\\\" is not compared with. This is the only baseline that works in this task. It is crucial to compare with it.\", \"The authors claimed that \\\"Instruct 4D-to-4D\\\"'s code is not publicly available. However, according to the Github repo of \\\"Instruct 4D-to-4D\\\", the code was released on 8/29, which is one month before the deadline of ICLR. The paper should, therefore, have compared against this method.\", \"Even if the code is not released, all the datasets used by Instruct 4D-to-4D are public. Therefore, a comparison against Instruct 4D-to-4D should still have been achieved with those datasets and the same editing tasks as Instruct 4D-to-4D.\", \"In Tab.1, only the \\\"Average\\\" row is marked bold on the best numbers. 
Other rows should also be marked (and it seems that \\\"Ours\\\" are always the best, so this should improve the soundness).\", \"The model needs the user to provide descriptions of both original and edited scenes, which requires more human work. The baselines IN2N, GSEditor-4D, and Instruct 4D-to-4D only require editing instruction.\", \"The only 4D scenes used for editing are just three monocular dynamic scenes.\", \"As a comparison, the baseline Instruct 4D-to-4D compares with at least 3 monocular dynamic scenes and 5 multi-view dynamic scenes, covering DyCheck, DyNeRF, Google Immersive, etc, and as long as 300 frames.\", \"Therefore, this paper's comparison experiments are significantly weaker and more incomplete than the baseline.\"], \"questions\": [\"Following the weaknesses, please consider:\", \"Compare with video editing methods in spatial quality.\", \"Compare with baseline Instruct 4D-to-4D with its already released code.\", \"Compare with baseline Instruct 4D-to-4D by using the datasets it is using, i.e., DyCheck, DyNeRF, and Google Immersive, with the corresponding tasks.\", \"Evaluate the method on more multi-view scene datasets.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents 4DEditPro, a new framework for editing 4D scenes in casual monocular videos using text prompts. Unlike conventional methods that require multi-view images or known camera poses, 4DEditPro works with single-view videos, allowing easy and consistent scene edits without extra setup. It achieves this by combining two modules: Temporally Propagated Editing (TPE) for smooth, time-consistent edits across frames. Spatially Propagated Editing (SPE) for spatial consistency by generating nearby \\u201cvirtual views\\u201d to fill in missing details. 
Using a pose-free 4D Gaussian Splatting (4DGS) technique, 4DEditPro reconstructs scenes without needing camera poses, enabling flexible, high-quality editing. The approach is effective for both targeted edits and broader style changes, making text-driven video editing practical and seamless.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The task itself is valuable and timely, addressing the growing need for efficient 4D scene editing in casual videos.\\n2. The paper is well-structured and clearly written, making it easy for readers to understand the methodology and approach.\\n3. The experiments are thorough and appear to be well-designed, with enough detail provided to ensure reproducibility by others in the field.\", \"weaknesses\": \"1. The process for selecting the reference token lacks detail, and it\\u2019s unclear how this selection impacts the final results.\\n2. The pipeline doesn\\u2019t present any particularly innovative insights.\\n3. The editing results seem somewhat imprecise; for example, in Figure 4, the \\\"silver\\\" and \\\"night\\\" edits appear unnatural.\", \"questions\": \"Is there a dynamic demo available to assess the quality of the scene dynamics? I'd be interested in increasing my rating after seeing more extensive examples.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces a framework for 4D scene editing from monocular videos guided by text prompts. The proposed techniques, Temporally Propagated Editing (TPE) and Spatially Propagated Editing (SPE), ensure temporal and spatial consistency in the editing process. 
By introducing progressive dynamic representation through 4DGS, the framework can model scene attributes without requiring camera pose as an input.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. First approach for 4D scene editing from casual monocular videos, eliminating the need for camera pose input.\\n2. Introduced Temporally Propagated Editing (TPE) and Spatially Propagated Editing (SPE) modules to improve temporal and spatial consistency.\\n3. Quantitative evaluations show better performance over baselines, indicating the proposed method\\u2019s effectiveness across multiple metrics.\", \"weaknesses\": \"1. Temporal consistency is not maintained. In the supplementary video, noticeable flickering occurs in several segments, such as the sailing boat (00:26\\u201300:28), Minecraft scene (00:45\\u201300:47), horse editing (00:48\\u201300:51), and statue editing (00:52\\u201300:56).\\n2. The synthesized novel views show minimal differentiation from the original video, as seen in segments 00:22\\u201300:23 and 1:08\\u20131:12.\\n3. Furthermore, the supplementary video primarily demonstrates static view synthesis, despite the method being proposed for 4D editing.\\n4. The editing results showcased in the supplementary materials are mostly focused on color, style, and texture adjustments, with minimal instances of object shape editing. This suggests the method\\u2019s contributions in editing might be overstated.\\n5. In terms of comparisons, the paper primarily contrasts its approach with static 3D scene editing methods, even though it claims to support 4D editing. Given that the showcased editing focuses on color, style, and texture modifications, a more fitting baseline would involve applying a video style transfer technique to the input video, followed by reconstructing the 4D scene using methods designed for monocular videos.\", \"questions\": \"1. 
What factors contribute to the suboptimal performance of Gaussian Editor results? Given that scenes in Tanks and Temples and SemanticKITTI datasets are static (lacking moving objects), would it not be more appropriate to compare with the 3D version of Gaussian Editor? Furthermore, when applied to static scenes, do the results of the 3D and 4D versions of Gaussian Editor differ, or are they effectively the same?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4dAhjhm2Mm
A Score-Based Density Formula, with Applications in Diffusion Generative Models
[ "Gen Li", "Yuling Yan" ]
Score-based generative models (SGMs) have revolutionized the field of generative modeling, achieving unprecedented success in generating realistic and diverse content. Despite empirical advances, the theoretical basis for why optimizing the evidence lower bound (ELBO) on the log-likelihood is effective for training diffusion generative models, such as DDPMs, remains largely unexplored. In this paper, we address this question by establishing a density formula for a continuous-time diffusion process, which can be viewed as the continuous-time limit of the forward process in an SGM. This formula reveals the connection between the target density and the score function associated with each step of the forward process. Building on this, we demonstrate that the minimizer of the optimization objective for training DDPMs nearly coincides with that of the true objective, providing a theoretical foundation for optimizing DDPMs using the ELBO. Furthermore, we offer new insights into the role of score-matching regularization in training GANs, the use of ELBO in diffusion classifiers, and the recently proposed diffusion loss.
[ "score-based density formula", "score-based generative model", "evidence lower bound", "denoising diffusion probabilistic model" ]
https://openreview.net/pdf?id=4dAhjhm2Mm
https://openreview.net/forum?id=4dAhjhm2Mm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v5ghHMrPQn", "rse7rIDkLs", "kbHM69brUl", "ZzB1VDit4w", "Z5dxLA9Am1", "WqoBUaW3EE", "DccmlgP58q", "67uUnYEsIG", "4GwVhQeZjH" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1733141552846, 1730715904232, 1730689235128, 1730639252115, 1730043190131, 1733141543660, 1733141584520, 1733141561830, 1733141572573 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3492/Authors" ], [ "ICLR.cc/2025/Conference/Submission3492/Reviewer_WunF" ], [ "ICLR.cc/2025/Conference/Submission3492/Reviewer_SVbJ" ], [ "ICLR.cc/2025/Conference/Submission3492/Reviewer_dKZW" ], [ "ICLR.cc/2025/Conference/Submission3492/Reviewer_kQtt" ], [ "ICLR.cc/2025/Conference/Submission3492/Authors" ], [ "ICLR.cc/2025/Conference/Submission3492/Authors" ], [ "ICLR.cc/2025/Conference/Submission3492/Authors" ], [ "ICLR.cc/2025/Conference/Submission3492/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank you for the time and effort you invested in reviewing our submission. We greatly appreciate your feedback and suggestions, which have highlighted several areas for improvement in our work. We have decided to withdraw our submission, which allows us to address the concerns raised and improve the manuscript.\"}", "{\"summary\": \"This paper considers a density formula based on score estimation to analyze Denoising Diffusion Probabilistic Models (DDPMs). Using this formula, the authors provide a theoretical basis for why optimizing the evidence lower bound (ELBO) serves as an effective approach for training these models. 
The paper addresses the problem of understanding ELBO optimization for diffusion models, adding theoretical context to a widely used empirical technique.\\n\\nThe analysis extends to practical implications across different generative modeling contexts, including applications to GAN regularization, diffusion classifiers, and autoregressive models. By investigating these areas, the authors demonstrate how insights from the density formula can support training and optimization practices in various generative frameworks. This broad applicability suggests that the theoretical findings may be interesting to both foundational research and practical applications in generative modeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper focuses on the theoretical understanding of score-based generative models (SGMs) by applying a density formula to explain why optimizing the evidence lower bound (ELBO) effectively supports training for diffusion models like DDPMs. By investigating the theoretical aspects behind ELBO optimization, the authors promote a more rigorous basis for diffusion models.\\n\\nAdditionally, the paper extends the implications of this analysis to areas such as GAN regularization, diffusion classifiers, and autoregressive models, illustrating the potential for these findings to enhance model training practices across various generative frameworks.\", \"weaknesses\": \"The biggest weakness of the work lies in the lack of novelty and the positioning with respect to the literature.\\nIn particular, it is not evident how the work differs from known results such as [1], where Thm 4 + eq (25) and the comment in eq (29) seem to provide a result that is even more general than the one discussed by the authors. 
It is worth mentioning that related results which the authors do not cite in their work are also presented in [2], in particular Thm 3.\\n\\n\\nOne other big limitation is that the authors derive their connection between continuous and discrete time (with an **approximated score**) by a sequence of approximations, without properly discussing their impact. A clear quantitative analysis would greatly strengthen the paper. \\n\\n\\n\\n[1] Huang et al., A Variational Perspective on Diffusion-Based Generative Models and Score Matching (NeurIPS 2021)\\n\\n[2] Song et al., Maximum Likelihood Training of Score-Based Diffusion Models (NeurIPS 2021)\", \"questions\": \"Please refer to the weaknesses section. Also, why do the authors consider the particular SDE in lines 76-77 and not a more generic one?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Despite empirical advances, the theoretical foundation for why optimizing the evidence lower bound (ELBO) on the log-likelihood is effective for training diffusion generative models, such as DDPMs, remains largely unexplored. The authors propose to address this question by establishing a density formula for a continuous-time diffusion process, which can be viewed as the continuous-time limit of the forward process in an SGM. The formula shows that the variational gap is negligible.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The analysis of the variational gap for the continuous-time approximation of discrete-time diffusion models, such as DDPM, has been missing. The authors conducted clear and solid derivations to show why the variational gap is negligible, which provides justification for the empirical practice of using the ELBO in place of the true objective.\", \"weaknesses\": \"1. 
Despite the soundness of the derivations, I found that the goal of this paper is not very interesting.\\n\\n2. Insights on GANs are not clear to me (a layperson in GAN).\", \"questions\": \"Your SDE $d X_t = - \\\\frac{1}{2(1-t)} X_t d t + \\\\frac{1}{\\\\sqrt{1-t}} d B_t $ is a special case of Song's SDE in [1], which takes the form\\n\\n$$d X_t = -\\\\frac{1}{2} \\\\beta_t X_t dt + \\\\sqrt{\\\\beta_t} d B_t$$\\n\\nIt appears that there are countless choices of $\\\\beta_t$. Why do you claim that $\\\\beta_t = \\\\frac{1}{1-t}$ is the continuous-time limit of the aforementioned forward process in section 2.1 and is preferable to Song's linear version $\\\\beta_t = \\\\beta_{\\\\min} + t (\\\\beta_{\\\\max} - \\\\beta_{\\\\min})$? \\n\\n[1] Score-Based Generative Modeling through Stochastic Differential Equations. ICLR'21.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper derives a density formula for a continuous-time diffusion process, which can be viewed as the continuous-time limit of the forward process of an SGM. The formula relates the target density and the score functions at different time steps. The authors use the formula to show that maximizing the ELBO in DDPM is approximately equivalent to minimizing the KL divergence between the target distribution and the learned distribution. The authors also apply the approximation to explain the use of score-matching regularization in GAN training, the ELBO in diffusion classifiers, and the diffusion loss in autoregressive models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a formula relating the target density and the scores at different time steps.\\n\\n2. 
The paper shows that maximizing the ELBO in DDPM is approximately equivalent to minimizing the KL divergence of the target distribution and the learned distribution.\", \"weaknesses\": \"1. The contribution is unclear. The discussion about some important existing results are missing.\\n\\n2. The applications of the density formula presented here involves approximations, but there is no characterization of the approximation errors. \\n\\n3. The presentation needs improvement.\", \"questions\": \"1. Several existing works also explore the relationship between the density and the optimization objectives of diffusion models, e.g. [1] and [2]. What's the relation between the current results and those in existing works.\\n\\n [1] Kong et al. \\\"Information Theoretic Diffusion.\\\"\\n\\n [2] Song et al. \\\"Maximum likelihood training of score-based diffusion models.\\\" \\n\\n2. In general variational inference, for fixed observations, maximizing the ELBO is equivalent to minimizing the KL divergence. What's new in the current results compared to the general observation? \\n\\n3. Are there error bounds for the various approximations? Without such bounds, why do we expect the interpretation using approximations to be better and more useful than the interpretation using lower bounds? \\n\\n4. Existing theoretical results have provided error bounds for KL divergence. What's the connection between those results and the current results? What additional insights can the current results bring?\\n\\n5. 
What's the advantage of the SDE in (2.4) over the more commonly used O-U process?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work is presented as a theoretical contribution to the field of diffusion models.\", \"the_overall_structure_of_the_paper_mimics_the_intended_objective_of_this_work\": \"to revisit both continuous-time and discrete-time diffusion models to arrive at the (exact and approximate) definition of a density expression for the log data density (in continuous and discrete time), that is used to: i) discuss the validity of an ELBO formulation for the optimization of the parameters of the denoising network of discrete diffusion models, ii) understand the optimization objective in generative adversarial networks, iii) provide a justification for classifier-based guidance in diffusion models, and iv) show that the diffusion loss used in autoregressive models corresponds to an approximate maximum likelihood solution.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"It is important to revisit known results that might have been obtained through carefully engineered heuristics, through the lenses of a sound theoretical formalism, such that the community can validate existing choices. The endeavor of this work is in line with this objective, which I think is valuable.\", \"This work shows that the theory developed to derive an expression for the density of the data distribution can be applied to numerous modeling approaches to generative modeling.\", \"The mathematical derivations in Appendix A (which are the most important to me), seem correct.\"], \"weaknesses\": \"* Sec. 2: this section repurposes known results from the literature, including [1,2,3], in which it has been shown the equivalence between discrete-time and continuous-time variants of diffusion. 
Note also that [3], which is not cited by the authors, shows that \\\"*the log-likelihood of score-based models can be tractably computed through a connection to continuous normalizing flows, even though the log-likelihood is not directly optimized by the weighted combination of score matching losses. In particular, it is shown that under a specific weighting scheme the score-matching objective upper bounds the negative log-likelihood, thus enabling approximate maximum likelihood training of score-based models.*\\\"\\n* In sec 2.1, DDPM are revisited, but mixed with score functions, yielding Eq. 2.3. Why and how does the score function appears in discrete-time diffusion?\\n* In sec 2.2, I am curious to learn why Eq. 2.4 has been chosen to be so specific, instead of using a more general form with a functional drift term. Here you specify a linear drift whose coefficients explode, compared to the typical variance preserving formulation from [1], as time $t \\\\to 1$.\\n\\n[1] Song et al. \\u201cScore-Based Generative Modeling through Stochastic Differential Equations\\u201d, https://arxiv.org/abs/2011.13456\\n\\n[2] Ho et al. \\u201cDenoising Diffusion Probabilistic Models\\u201d, https://arxiv.org/abs/2006.11239\\n\\n[3] Song et al. \\u201cMaximum Likelihood Training of Score-Based Diffusion Models\\u201d, https://arxiv.org/abs/2101.09258\\n\\n* Sec. 3: This section displays some calculations that rely on the continuous-time formulation of diffusion processes. Sec. 3.1 begins by focusing on Eq. 2.4, which is the linear variance preserving SDE discussed above. Sec. 3.2 continues the derivations, to relate continuous-time and discrete-time known results, and Sec. 3.3 discusses known results on the equivalence to a probability flow ODE and more recent results on density estimation. What are the main take home messages here? What is the original contribution the authors would like to put forward in this section?\\nTo the best of my understanding, the result in Eq. 
3.1.a is an exact formulation for the log likelihood of the data distribution that did not require, as done in [1,3], the probability flow ODE equivalence. I followed the proof in Appendix A, and to my eyes it seems correct.\\nSec. 3.2 also deserves more insight from the authors, as it gives an approximate log density for the discrete case, bypassing the need to work directly in discrete time. Can we quantify the discretization errors that are introduced by relying on Eq 3.1.c?\\n\\n* Sec. 4: This is an \\u201capplication\\u201d of the exact log density expression for the data distribution from Sec. 3.\\nSec. 4.1 aims at discussing the validity of the ELBO formulation as a good proxy for the log likelihood, to demonstrate that in DDPM optimizing the ELBO is a valid replacement for optimizing the log likelihood. This can also be understood from [2] and [1] above, and, for continuous time, is readily discussed in [3], which also shows the similarity (modulo discretization errors and constants) between continuous-time and discrete-time formulations. So, what do we learn from the derivations presented in this section that were not directly discussed in these earlier works?\\nSec. 4.2 raises the same question, and should be reviewed in light of an overloaded notation: please check that $z$ is used both as a random variable sampled from a noise distribution and as a normalizing factor.\\nSec. 4.3 revisits classifier guidance mechanisms for conditional generation using diffusion models, and offers a critique of some practical heuristics used in recent work, based on the density defined in this paper.\\nSimilarly, Sec. 4.4 revisits autoregressive models in light of the proposed density definition, and suggests that the training objective used in the literature can be viewed as approximate maximum likelihood training.\", \"questions\": [\"In light of the comments about weaknesses above, can you clearly spell out what the novel contributions of the submitted article are? 
I am not against revisiting known results to set the stage for the main contributions, but I have the impression that most of the conclusions drawn in Sec. 4, which is where the authors use their revisited formulation of the log data density, have been known to the community, also from the theoretical point of view, and not only from an heuristic perspective. Can you also answer to the questions raised in the \\\"weaknesses\\\" section of this review?\", \"Despite the intelligible intent of reuniting continuous-time and discrete-time models, I find the exposition of results in Sec. 2 and Sec. 3, according to slightly different formulations than those existing in the literature, is confusing. Is there a way to organize this work such that contributions are more clear, and the implications of the presented theory spelled out well?\", \"Would you feel comfortable by stating that your exact formulation of the log density of the data distribution as a function of the drift and diffusion terms of the SDEs, or equivalently the log density of the data distribution as a function of the transition kernels and noise of the discrete-time diffusion, as a novel result that has not been discussed in the literature?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for the time and effort you invested in reviewing our submission. We greatly appreciate your feedback and suggestions, which have highlighted several areas for improvement in our work. We have decided to withdraw our submission, which allows us to address the concerns raised and improve the manuscript.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"We sincerely thank you for the time and effort you invested in reviewing our submission. 
We greatly appreciate your feedback and suggestions, which have highlighted several areas for improvement in our work. We have decided to withdraw our submission, which allows us to address the concerns raised and improve the manuscript.\"}", "{\"comment\": \"We sincerely thank you for the time and effort you invested in reviewing our submission. We greatly appreciate your feedback and suggestions, which have highlighted several areas for improvement in our work. We have decided to withdraw our submission, which allows us to address the concerns raised and improve the manuscript.\"}" ] }
4dAgG8ma3B
Chemistry-Inspired Diffusion with Non-Differentiable Guidance
[ "Yuchen Shen", "Chenhao Zhang", "Sijie Fu", "Chenghui Zhou", "Newell Washburn", "Barnabas Poczos" ]
Recent advances in diffusion models have shown remarkable potential in the conditional generation of novel molecules. These models can be guided in two ways: (i) explicitly, through additional features representing the condition, or (ii) implicitly, using a property predictor. However, training property predictors or conditional diffusion models requires an abundance of labeled data and is inherently challenging in real-world applications. We propose a novel approach that attenuates the limitations of acquiring large labeled datasets by leveraging domain knowledge from quantum chemistry as a non-differentiable oracle to guide an unconditional diffusion model. Instead of relying on neural networks, the oracle provides accurate guidance in the form of estimated gradients, allowing the diffusion process to sample from a conditional distribution specified by quantum chemistry. We show that this results in more precise conditional generation of novel and stable molecular structures. Our experiments demonstrate that our method: (1) significantly reduces atomic forces, enhancing the validity of generated molecules when used for stability optimization; (2) is compatible with both explicit and implicit guidance in diffusion models, enabling joint optimization of molecular properties and stability; and (3) generalizes effectively to molecular optimization tasks beyond stability optimization. Our implementation is available at https://github.com/A-Chicharito-S/ChemGuide.
[ "guided diffusion", "ai4science", "molecule generation" ]
Accept (Poster)
https://openreview.net/pdf?id=4dAgG8ma3B
https://openreview.net/forum?id=4dAgG8ma3B
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCWwvGYz6O", "wNZqr4Fdbz", "wA5jLsyEsR", "v4E3XmuzdL", "rqdr3Zf6wq", "qEaB87oDLv", "pSYkjIzOHt", "nHcLH4yqll", "ksM2DUELd7", "iMspT6LkPn", "hk1C9qcZok", "gsMCz0qEgE", "de2LIvWY2L", "Vz9BcTV1QN", "VPAbZyB4ge", "UrUPt5fBhg", "TsiHmsQFyl", "OnWzgp0ByY", "OB7A867nev", "MYcD67zzsd", "HC5tUH3eFd", "F6u5rrChtg", "DcjmHVP7dP", "BPQXAx5prt", "B75T8k5Il6", "8RhDj0Vs7T", "8BLGcJPJhX", "7yoVeYBjap", "6JhrpICHOQ", "4umnGLHieI", "07tQKnLccb" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732399911115, 1732399892170, 1737524170729, 1732401143727, 1730701160772, 1730340200336, 1732498197881, 1730387724038, 1732400342442, 1733287513146, 1732399831733, 1732400644127, 1732541809344, 1733288047628, 1732400855359, 1732399442621, 1732401336189, 1733288177228, 1730330380888, 1732400656563, 1732400329088, 1732565434477, 1732400252141, 1733182839617, 1733288157754, 1732497142075, 1732399866991, 1732498023928, 1732401318633, 1734723974014, 1732400874373 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_V6At" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_tzoc" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_tzoc" ], [ 
"ICLR.cc/2025/Conference/Submission12166/Reviewer_vmGX" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_wXP8" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_wXP8" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_vmGX" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_V6At" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Reviewer_tzoc" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ], [ "ICLR.cc/2025/Conference/Submission12166/Area_Chair_aer3" ], [ "ICLR.cc/2025/Conference/Submission12166/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Table 6 xTB force guidance using 200 guidance steps on 500 molecules sampled on QM9\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| EDM \\t| GeoLDM \\t|\\n|-------------------------------|----------------------|----------------------|----------------------|----------------------|----------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| 0.0119 \\t| **0.0106*** (-4.28%\\u2193) | 0.0119 \\t| 0.0114 \\t| 0.0116 \\t| 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| 90.00% \\t| **91.00%*** (1.20%\\u2191) | 90.00% \\t| 90.60% \\t| 90.60% \\t| 
86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t | 98.87% \\t| 98.72% \\t| 98.87% \\t| **98.90%*** (-0.03%\\u2193) | 98.86% \\t| 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** \\t| 89.00% \\t| 88.40% \\t| 89.00% \\t| **89.40%*** (0.60%\\u2191) | 89.00% \\t| 83.60% \\t| 88.80% \\t|\\n| **Energy above ground state (Eh)** | 0.0054 \\t| **0.0045*** (-9.32%\\u2193) | 0.0054 \\t| 0.0049 \\t| 0.0050 \\t| 0.0072 \\t| 0.0050\\t|\\n\\nTable 7 xTB force guidance using 300 guidance steps on 500 molecules sampled on QM9\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| EDM \\t| GeoLDM \\t|\\n|-------------------------------|----------------------|----------------------|----------------------|----------------------|----------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| 0.0109 \\t| 0.0109 \\t| **0.0107*** (-3.98%\\u2193) | 0.0108 \\t| 0.0115 \\t| 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| **92.80%*** (3.00%\\u2191) | **92.80%*** (3.00%\\u2191) | 92.60% \\t| 92.20% \\t| 88.80% \\t| 86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| 99.57% \\t| **100.00%*** \\t| 99.58% \\t| 99.36% \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t| 98.93% \\t| **99.09%*** (0.15%\\u2191) | 99.00% \\t| 98.99% \\t| 98.88% \\t | 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** \\t| 89.80% \\t| **90.80%*** (2.00%\\u2191) | 89.60% \\t| 89.60% \\t| 88.00% \\t| 83.60% \\t| 88.80% \\t|\\n| **Energy above ground state (Eh)** | 0.0049 \\t| 0.0052 \\t| **0.0045*** (-10.64%\\u2193) | 0.0046 \\t| 0.0056 \\t| 0.0072 \\t| 0.0050\\t|\\n\\n\\nTable 8 xTB force guidance using 400 guidance steps on 500 molecules sampled on QM9\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| EDM \\t| GeoLDM 
\\t|\\n|-------------------------------|----------------------|----------------------|----------------------|----------------------|----------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| **0.0104*** (-6.76%\\u2193) | 0.0104 \\t| 0.0107 \\t| 0.0108 \\t| 0.0125 \\t| 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| **91.40%*** (1.60%\\u2191) | 91.20% \\t| 91.20% \\t | 90.00% \\t| 89.40% \\t| 86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t| 99.02% \\t| 98.97% \\t| 98.98% \\t| **99.03%*** (0.10%\\u2191) | 98.67% \\t| 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** \\t| 90.60% \\t| 90.40% \\t| 90.20% \\t| **91.00%*** (2.20%\\u2191) | 87.80% \\t| 83.60% \\t| 88.80% \\t|\\n| **Energy above Ground State (Eh)** | **0.0042*** (-15.78%\\u2193) | 0.0042 \\t| 0.0045 \\t| 0.0050 \\t| 0.0061 \\t| 0.0072 \\t| 0.0050 |\"}", "{\"comment\": \"### **2. Relaxation of xTB: Fewer guidance steps**\\n- **Non-consecutive guidance**: this is called skip-step, as an acceleration method, where we add guidance every $N$ steps. Details can be found in **Appendix H1** (previously F1). We explore adding guidance every 3 and 5 steps. It turns out that in general more guidance steps, better results. 
The results are reported in below table:\\n\\nTable 4 xTB force guidance using skip-step acceleration on 500 molecules sampled on QM9\\n\\n| Metric | Every 1 Step (s) | Every 3 Steps (s) | Every 5 Steps (s) | EDM | GeoLDM |\\n|-------------------------------|--------------------|---------------------|---------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** | **0.0104*** (-6.76%\\u2193) | 0.0106 | 0.0011 | 0.0114 | 0.0111 |\\n| **Validity** | **91.40%*** (1.60%\\u2191) | 91.40% | 89.40% | 86.60% | 89.80% |\\n| **Uniqueness** | **100.00%*** | 99.79% | 100.00% | 100.00% | 100.00% |\\n| **Atom Stability** | 99.02% | 99.01% | 98.63% | 98.53% | 98.93% |\\n | **Molecule Stability** | 90.60% | 90.60% | 87.20% | 83.60% | 88.80% |\\n | **Energy above ground state (Eh)** | **0.0042*** (-15.78%\\u2193) | 0.0045 | 0.0055 | 0.0072 | 0.0050|\\n\\n- **Consecutive yet delayed guidance**: instead of adding 400 guidance steps, we explore adding 100, 200, and 300 guidance steps. Details can be found in **Appendix H.2**. It turns out that in general more guidance steps, better results. 
The results are reported in below table:\\n\\nTable 5 xTB force guidance using 100 guidance steps on 500 molecules sampled on QM9\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| EDM \\t| GeoLDM \\t|\\n|-------------------------------|----------------------|----------------------|----------------------|----------------------|----------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| **0.0110*** (-0.88%\\u2193) | **0.0110*** (-0.88%\\u2193) | 0.0110 \\t| 0.0113 \\t| 0.0120 \\t| 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| 90.40% \\t| 90.40% \\t| 90.40% \\t| **90.80%*** (1.00%\\u2191) | 90.60% \\t| 86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t| **98.94%*** (0.01%\\u2191) | **98.94%*** (0.01%\\u2191) | **98.94%*** (0.01%\\u2191) | 98.91% \\t| 98.92% \\t| 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** \\t| 89.40% \\t| 89.40% \\t| 89.40% \\t| **89.60%*** (0.80%\\u2191) | **89.60%*** (0.80%\\u2191) | 83.60% \\t| 88.80% \\t|\\n| **Energy above ground state (Eh)** | 0.0051 \\t| **0.0051*** (0.87%\\u2191) | 0.0051 \\t| 0.0055 \\t| 0.0057 \\t| 0.0072 \\t| 0.0050\\t|\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the detailed suggestions on the presentation of our paper! We address the weaknesses and questions as follows.\\n\\n### **1. Justification of SPSA & Comparison to other methods**\\n\\n- **SPSA justification**: we have added more descriptions and our motivation for SPSA in **Appendix F** Motivation for Gradient Estimation with SPSA. 
We previously considered using finite difference to calculate the gradient as $\\\\frac{F([z_{x, t}+\\\\zeta 1_{N \\\\times 3}, z_{h, t}])-F([z_{x, t}-\\\\zeta 1_{N \\\\times 3}, z_{h, t}])}{2\\\\zeta}$, however, it violates the zero-mean requirements for 3D diffusion models to be equivariant (i.e., $z_{x, t}\\\\pm\\\\zeta 1_{N \\\\times 3}$ shifts the mean by $\\\\pm\\\\zeta$), which motivates the choice of our SPSA based estimation (i.e., the perturbation is sampled from the normal distribution).\\n\\n- **Alternative methods**: we have conducted new experiments and added the results for evolutionary algorithms in the (global) official comments above. (please see **Appendix I** in our current revised paper). It turns out that ChemGuide outperforms the evolutionary algorithm in terms of force RMS and energy about ground state.\\n\\n### **2. Quality of the guidance gradients**\\n\\nWe employed a neural network potential AIMNet2 [1] to calculate the force, and compared its backpropagated gradient with ChemGuide measured in cosine similarity in Appendix H3 ChemGuide vs. Neural Guidance on Force. We can observe that the cosine similarity over time oscillates around 0, suggesting that the guidance from ChemGuide and neural networks are different, hence the discrepancy in performance.\\n\\nWe report the results of neural network guidance on force in the (global) official comments above. (**Appendix H3** in our current revised paper). We observe that using neural networks for force guidance **underperforms ChemGuide**, and significantly increases memory demands, which makes guidance on large molecules extremely difficult (e.g., a 48 GiB GPU can only generate 5 molecules at a time on GEOM, making sampling a large amount of molecules almost linear in time). \\n\\n### **3. 
Compatibility between guidance methods and property**\\n\\nFrom the chemistry definitions, there is no direct evidence to suggest that one property would be harder or easier than another for the neural regressor to learn. However, we can observe from Figure 7 on the generalization analysis of the neural regressor that on property $\\alpha$ the regressor is more robust to perturbations compared to other properties; meanwhile, in Table 4, noisy guidance performs better than clean guidance on $\\alpha$, and vice versa for the other properties. \\nWe hypothesize that this is because noisy guidance requires backpropagation through the diffusion model, while clean guidance translates the optimized representation $x_0$ in the clean space back to $x_t$ with pre-defined schedules. Thus, with a robust regressor (e.g., on $\\alpha$), noisy guidance might be preferred over clean guidance as it directly backpropagates to $x_t$; given a less precise regressor on out-of-distribution molecules, clean guidance would be favored, as the errors are more likely to be broadcast through both the VAE decoder and the latent diffusion model for noisy guidance. \\n\\nHowever, our analyses are preliminary and do not include many factors (e.g., different architectures of the diffusion models), and further investigation in this direction would be very valuable.\"}", "{\"summary\": \"The paper proposes CHEMGUIDE, an approach that uses non-differentiable guidance for the conditional generation of molecular structures in diffusion models. CHEMGUIDE uses quantum chemistry oracles as guidance, providing gradients for the sampling distribution. The method applies zeroth-order optimization to enhance molecule stability by minimizing atomic forces. Experiments demonstrate CHEMGUIDE\\u2019s effectiveness on two datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
CHEMGUIDE\\u2019s use of quantum chemistry as a non-differentiable oracle in conditional diffusion models is meaningful.\\n2. The paper reports improvements in stability and force metrics over baselines.\\n3. Implementing zeroth-order optimization in a diffusion context is well-justified.\", \"weaknesses\": \"1. The paper could enhance its rigor by comparing CHEMGUIDE with more baseline models, such as MolGAN, and more importantly, other existing guidance methods. The current comparisons seem limited. It would also be more comprehensive to experiment with more datasets.\\n2. While GFN2-xTB is a reasonable compromise, comparing CHEMGUIDE results against high-accuracy methods like DFT more extensively could help validate the chemical accuracy of generated molecules.\\n3. The paper lacks a thorough discussion on the limitations of using a non-differentiable oracle, such as the potential difficulty in handling certain molecular configurations or diverse chemical spaces. \\n4. The use of the GFN2-xTB method and bilevel optimization adds computational complexity, which could restrict practical usage. The guidance scale parameter also lacks an adaptive mechanism. Exploring automated scale scheduling would improve usability.\\n5. Code is not provided.\", \"questions\": \"Please see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies how to achieve derivative-free molecular optimization with diffusion models. The main motivation for this work is that many real-world molecular properties are sophisticated and can only be evaluated through expensive experiments or non-differentiable simulations. In this paper, a zero-order optimization method is constructed by perturbing the input molecular conformation and measuring the effect on the molecular properties. 
The effectiveness of the proposed methods is validated on a set of quantum mechanical properties for small molecules.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This paper studies an important and timely problem: how to move beyond generative models (which randomly sample from learned distributions) to efficient search and optimization over the space with guidance is a pressing question in molecular generation.\", \"Derivative-free guidance is very well-motivated and I agree that it is of great importance to problems in the real-world molecular discovery process.\"], \"weaknesses\": \"I have several concerns about this paper, mostly coming from the claim, related work, and experiments.\\n\\n* First of all, the idea of derivative-free guidance is not new: in molecular optimization, evolutionary algorithms have been used [1], as well as twisted sequential Monte Carlo for protein design [2], twisted SMC for large language model inference [3], and stochastic optimal control methods for music generation [4]. I believe the claim for this work to be the first of its kind is inappropriate, whether for derivative-free guidance or for molecular design.\\n\\n* Given this is not new, the related work section in the Appendix only discusses the general molecule generation literature and part of the guided diffusion model literature, but misses the critical relevant literature both in molecular generation and other domains.\\n\\n* The experimental results are weak. Even if I agree on the general motivation of derivative-free guidance, (1) there are works such as simple evolutionary algorithms [1] and twisted SMC [2] available for comparison; even if you do not want to compare against them, you need to compare with gradient-based methods --- if you think about the experiment budget, you can always construct a classifier from the samples you have evaluated, e.g. a trained neural network. 
Although this may not generalize OOD or may perform badly, you could still include it as a baseline. For more potential baselines to compare against, you can check this benchmark [5].\\n\\n[1] Schneuing, A., Du, Y., Harris, C., Jamasb, A., Igashov, I., Du, W., Blundell, T., Li\\u00f3, P., Gomes, C., Welling, M. and Bronstein, M., 2022. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695.\\n\\n[2] Wu, L., Trippe, B., Naesseth, C., Blei, D. and Cunningham, J.P., 2024. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Zhao, S., Brekelmans, R., Makhzani, A. and Grosse, R.B., Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo. In Forty-first International Conference on Machine Learning.\\n\\n[4] Huang, Y., Ghatare, A., Liu, Y., Hu, Z., Zhang, Q., Sastry, C.S., Gururani, S., Oore, S. and Yue, Y., Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion. In Forty-first International Conference on Machine Learning, 2024.\\n\\n[5] https://github.com/brandontrabucco/design-bench\", \"questions\": \"Most of my questions are about experiments; I feel the current experimental comparisons are too weak (only comparing with unconditional generation); see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Sorry! Corrected\", \"comment\": \"Sorry for the mistake! Corrected!\"}", "{\"summary\": \"The authors propose CHEMGUIDE, a sampling algorithm to guide pre-trained (latent) diffusion models for molecule design with the goal of optimizing for stable molecules, indicated by smaller force norms when evaluated using the xTB oracle functions. 
As the xTB oracle function, which outputs the forces per atom in a molecule, is non-differentiable, the authors make use of a known gradient approximation from random perturbation theory to approximate the gradients suitable for guidance during the diffusion sampling trajectory. The authors also suggest how their non-differentiable guidance can be combined with neural regressors as commonly done in the diffusion models literature. The authors show that their non-differentiable guidance, when applied to GeoLDM, leads to generated molecules with lower force norms compared to the samples when GeoLDM is used without the proposed guidance, indicating that their method works for the models trained on common benchmark datasets such as QM9 and GEOM-Drugs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The usage of approximate gradients from non-differentiable oracles in diffusion guidance for molecule design is novel and interesting.\\nTo evaluate their proposed method, the authors run a suite of multiple experiments showing that on the two common benchmarks the generated samples based on their sampling algorithm have improved evaluation metrics compared to the baselines.\", \"weaknesses\": \"The guidance component from the non-differentiable oracle shows small improvement on the QM9 and GEOM-Drugs datasets. While the idea is interesting, a stronger baseline to compare against is to use the samples generated from the baseline GeoLDM and perform a relaxation using xTB. 
As the authors mention in their appendix A - Implementation Details, the sampling time for 100 molecules is quite slow at 6 hours and 18 minutes if they perform their proposed guidance in the last 400 diffusion timesteps using xTB as the oracle.\\nHow does the GeoLDM baseline (right column) in Tables 1 and 2 compare if xTB is run with a pre-defined number of relaxation steps?\\n\\nFurthermore, I found it hard to read the paper, as Section 3 Methodology contains sections 3.1 and 3.2, which are not the authors' contribution but already existing methods. I would move these to Section 2 within the preliminaries.\", \"questions\": \"Is there a typo in the numerator in Eq. 15 when approximating the gradient? It should say $\\\\frac{ \\\\mathcal{F}[z_{x,t} + c U, z_{h,t}] - \\\\mathcal{F}[z_{x,t} - c U, z_{h,t}] }{2c}$.\\n\\nIn Algorithm 1, the approximated gradient g_{t-1} has a dependency on the state at time $t=0$. Is this a typo, since Eq. 15 does not refer to the clean data prediction?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We hope the above information helps address your concerns and questions. We are happy to answer any future questions you may have. Thank you!\\n\\nWe included additional results and summaries in <https://openreview.net/forum?id=4dAgG8ma3B&noteId=UrUPt5fBhg>.\"}", "{\"comment\": \"Thank you for your initial review suggesting that we add more baselines (evolutionary algorithms, a neural network for force prediction, twisted SMC). In our revised paper, we have added 1. the evolutionary algorithm [result](https://openreview.net/forum?id=4dAgG8ma3B&noteId=8BLGcJPJhX); 2. 
neural network for force prediction [result](https://openreview.net/forum?id=4dAgG8ma3B&noteId=hk1C9qcZok).\\n\\nIn our previous rebuttal, we pointed out that tSMC [1] requires $p_{y|x^0}(y|x^0)$ and its derivative and is thus not directly comparable to our work, and that their task in protein binding is intrinsically different from ours in molecule optimization. \\n\\nGiven your most recent comments on **\\u201cfor twisted SMC, \\u2026, additional gradient \\u2026 is not necessary\\u201d**, we would like to refer to Algorithm 1 in [1], where line 6 explicitly requires $\\nabla_{x_k^{t+1}} p_k^{t+1}$, and $p_k^{t}=p_{\\theta}(y|x_k^t)=p_{y|x^0}(y|\\hat{x}_{\\theta}(x^t))$ is defined in eq. (8). Without this gradient, it is not the **correct** twisted proposal function defined in eq. (9) and (10), so **it's necessary** to compute the gradient in tSMC. \\n\\nFurthermore, calculating $p_{y|x^0}(y|x^0)$ is impossible for stability, because **y** here is a #atoms$\\times 3$ tensor in $R^d$, and even for the same molecule there are infinitely many **y**s given the different 3D structures of the same set of atoms. Hence, even if we were given enough resources to explore the entire molecule space, it is impractical to estimate this $p_{y|x^0}(y|x^0)$ for stability.\\n\\nTo summarize, tSMC is not comparable in our setting because 1. the **correct** twisted proposal requires gradients, which was the bottleneck for introducing quantum chemistry into diffusion models before our method; 2. even if we were to use the **wrong** twisted proposal without gradients, the twisted weighting function requires $p_{y|x^0}(y|x^0)$, which is impossible to compute.\\n\\nFinally, as we explained in the Introduction section and re-emphasized in 1. 
Motivation of our rebuttal [here](https://openreview.net/forum?id=4dAgG8ma3B&noteId=UrUPt5fBhg), the **innovation** is to introduce **quantum chemistry for diffusion guidance**, instead of gradient estimation:\\n\\n\\u201cfor molecule optimization, we aim to achieve controllable generation at inference time with quantum chemistry software to bypass the requirement to train a neural property predictor or a conditional diffusion model that needs labels.\\u201d and we use gradient estimation \\u201cto overcome the difficulty that we can not backpropagate through the chemistry software.\\u201d\\n\\n[1] Wu, L., Trippe, B., Naesseth, C., Blei, D. and Cunningham, J.P., 2024. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36.\"}", "{\"comment\": \"We address common weaknesses and questions as follows.\\n\\n### **1. More baselines**\\n#### **- AIMNet2**\\n\\nwe replaced xTB with the neural network potential AIMNet2 for guiding force optimization. Detailed results and analysis are presented in **Appendix H.3**. We evaluated its performance by sampling 500 molecules from QM9 and 50 molecules from GEOM. We found two significant drawbacks of using neural networks as guidance:\\n\\n**Performance**: Using AIMNet2 for guidance negatively impacted performance. Specifically, for QM9, ChemGuide consistently outperforms AIMNet2 across all metrics. For GEOM, while AIMNet2 achieves similar results to ChemGuide in terms of force, validity, and stability, it struggles with the energy above the ground state. This indicates the optimization process is prone to being trapped in local minima, far from the global minima. \\n\\nTo understand the difference, we analyzed the cosine similarity between the guidance provided by ChemGuide and AIMNet2. 
The results showed that their gradients are nearly orthogonal, which accounts for the observed performance differences.\\n\\n**Memory Constraints**: AIMNet2 outputs forces in the shape [N, 3]. Backpropagating through these forces to compute guidance requires the Hessian matrix, resulting in a memory complexity of [N, 3, N, 3]. This complexity increases by a factor of $3N$ which significantly limits scalability. For GEOM, we could only sample 5 molecules at a time on a 48GiB GPU. In contrast, ChemGuide allowed us to sample 100 molecules per batch on the same device.\\n\\nWe report the performances below. * and **bold** denote the overall best result and our best result, respectively. Percentage changes between our results and GeoLDM are shown in parentheses.\", \"table_1\": \"AIMNet neural force guidance on 500 molecules sampled on QM9\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| ChemGuide\\t| EDM \\t| GeoLDM \\t|\\n|---------------------|----------------------|----------------------|----------------------|----------------------|-----------------------|---------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| 0.0113 \\t | 0.0113 \\t| 0.0112 \\t| 0.0114 \\t| 0.0110 (-1.00%\\u2193) \\t| **0.0104*** (-6.76%\\u2193) | 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| 89.20% (-0.60%\\u2193)\\t| 89.20% (-0.60%\\u2193)\\t| 89.00% \\t| 88.60% \\t| 89.00% \\t| **91.40%*** (1.60%\\u2191) | 86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** | 98.72% \\t| 98.72% \\t| 98.71% \\t| 98.74% \\t | 98.75% (-0.18%\\u2193) \\t| **99.02%*** (0.09%\\u2191) | 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** | 87.80% \\t| 87.80% \\t| 87.60% \\t| 87.80% \\t| 88.00% (-0.80%\\u2193) \\t| **90.60%*** (1.80%\\u2191) | 83.60% \\t| 88.80% \\t|\\n| **Energy above ground state (Eh)** | 0.0056 \\t| 0.0057 
\\t| 0.0056 \\t| 0.0059 \\t| 0.0055 (9.30%\\u2191) \\t| **0.0042*** (-15.78%\\u2193) | 0.0072 \\t| 0.0050\\t|\", \"table_2\": \"AIMNet neural force guidance on 50 molecules sampled on GEOM\\n\\n| Metric \\t| 0.0001 \\t| 0.001 \\t| 0.01 \\t| 0.1 \\t| 1.0 \\t| ChemGuide \\t| EDM \\t| GeoLDM \\t|\\n|-------------------------------|---------------------|---------------------|---------------------|---------------------|----------------------|---------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| 0.0418 \\t| 0.0418 \\t| 0.0418 \\t| 0.0409 \\t| **0.0406*** (-15.14%\\u2193) | 0.0411 (-14.16%\\u2193) | 0.0742 \\t| 0.0478 \\t|\\n| **Validity (xTB)** \\t| 50.00% (1.00%\\u2191)\\t| 50.00% \\t| 50.00%*** \\t| 50.00% \\t| 50.00% \\t| **50.40%*** (1.40%\\u2191) | 46.40% \\t| 49.00% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t| 85.01% \\t| 85.01% \\t| 85.01% \\t| 85.13% \\t| **85.26%*** (0.73%\\u2191) | 84.36% (-0.17%\\u2193)\\t| 81.22% \\t| 84.53% \\t|\\n| **Energy above ground state (Eh)** | 0.2392 \\t| 0.2392 \\t| 0.2392 \\t| 0.2356 \\t| 0.2299 (2.26%\\u2191)\\t| **0.1935*** (-13.92%\\u2193) | 0.3742 \\t| 0.2248\\t|\"}", "{\"comment\": \"Thank you for taking the time to review! We appreciate your feedback and address the weaknesses and questions as follows.\\n\\n### **1. Relaxation with xTB**\\n\\n- **Comparing to GeoLDM with xTB relaxation** We have included this baseline in tables 1-3 by reporting \\u201cenergy above ground state\\u201d, which is defined in Appendix B as \\u201c the difference between the energy before and after using xTB or DFT (specified in the context) to optimize the geometry of the generated molecules\\u201d. 
We can observe that **ChemGuide consistently achieves the smallest energy difference before and after quantum chemistry optimization** at different levels (xTB, DFT) across datasets (QM9, GEOM). \\n\\n- **Relaxation inside xTB** The number of geometric optimization steps is decided by xTB given the specific input molecule, which is a built-in and required process that cannot be reduced or relaxed to achieve speed-up. \\n\\n- **Relaxation on the guidance steps of ChemGuide** There are two ways to accelerate ChemGuide: 1. add guidance every $n$ steps; 2. use fewer consecutive guidance steps. \\n\\n - **Every $n$ step guidance**, the results are in **Appendix H1** (previously Appendix F1), where we choose $n=1, 3, 5$. We can observe that **a**. $n=1$ (no relaxation) performs the best, suggesting a trade-off between time and performance, and **b**. ChemGuide with $n=3, 5$ skip steps performs better than the baselines with smaller forces and energy above the ground state, demonstrating the effectiveness of our method. \\n\\n - **Fewer consecutive guidance steps** In the global official comment above, we conduct experiments that apply the guidance for 100, 200, and 300 steps to generate 500 molecules, and report the results (updated in **Appendix H2** in our current revised submission). A similar trend can be observed: reducing the number of guidance steps lowers performance.\\n\\n- **Relaxation on replacing xTB with neural networks** In the global official comment in Tables 3-8 above, we also relax the dependency on xTB and replace xTB with the neural network AIMNet2 [1], and report the results (detailed results and analysis are in H.3 of our current revised submission). We can observe that using a neural network as guidance hurts performance and, further, significantly increases GPU memory demands. 
\\n\\n- **Hardware level relaxation**, we can increase the number of CPU cores assigned to xTB when calculating force, and this functionality is implemented in our code.\\n\\n### **2. Order of Section 3.1**\\n\\nWe changed the order accordingly and moved Section 3.1 on noisy and clean guidance into Section 2 Preliminary. Thank you for your constructive suggestion to present our paper clearly!\\n\\n### **3. Eq. 15 typo**\\nWe fixed the typo accordingly. Thank you for your constructive suggestion to present our paper clearly!\\n\\n### **Reference**\\n[1] Dylan Anstine, Roman Zubatyuk, and Olexandr Isayev. Aimnet2: a neural network potential to\\nmeet your neutral, charged, organic, and elemental-organic needs. 2024.\"}", "{\"comment\": \"Thank you for providing new experiments and revising your manuscript to incorporate the changes requested.\\n\\nOverall, while I agree with the other reviewers about the existence of multiple non-differentiable guidance methods, I find the simplicity of the current method (using zeroth-order optimization to approximate the gradients of the oracle) appealing, especially in light of the discussion the authors provided when comparing to evolutionary algorithms and a property predictor neural network. \\n\\nI will thus maintain my score.\"}", "{\"comment\": \"Thank you for your response.\\n- While [1, 2] can accommodate non-differentiable reward functions, their methods are fundamentally different from ours. As described in Algorithm 1 of [2] and Section 4.2 of [1], these methods use the reward function to **fine-tune** their pre-trained models. In contrast, our approach is **training-free**, as we only apply xTB during inference, as highlighted in our introduction. 
This makes our method both simpler and more computationally efficient.\\n\\n- **Furthermore, since [2] appeared after the ICLR deadline, it was impossible for us to know about it in advance**.\\n\\n- For the questions raised by reviewer tzoc regarding more baselines to compare against, such as tSMC, please refer to [here](https://openreview.net/forum?id=4dAgG8ma3B&noteId=iMspT6LkPn). In summary, tSMC is different from ours because it requires gradient calculation. \\n\\n[1] Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control\\n\\n[2] Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction\"}", "{\"comment\": \"Thank you for taking the time and we appreciate your input to help make our paper a better work! We address the weaknesses and questions as follows.\\n\\n### **1. Novelty and motivation**\\nWe would like to re-emphasize that our focus is, **for molecule optimization**, on how to achieve controllable generation at inference time from quantum chemistry to bypass the requirement to train a neural property predictor or a conditional diffusion model, which **requires a large number of labels** that are hard to acquire. We choose stability (defined only on 3D) to explore this motivation, and hence experiment on 3D diffusion models. \\n\\nWe are not proposing a new guidance method (e.g., as in [2]) nor are we claiming to be the first to introduce derivative-free guidance to diffusion models.\\nIn fact, our method builds on previously proposed guided diffusion methods [3, 4], and the use of zeroth-order optimization techniques [5, 6] for gradient estimation is to overcome the difficulty that we cannot backpropagate through the chemistry software.\\n\\nThe evolutionary algorithm [1] relies on stochastic mutations and pruning, which requires a large search space to be effective and does not directly change the underlying **conditional** distribution from which the diffusion model samples. 
Gradients provide informative guidance by indicating the correct direction for optimizing stability. Although the evolutionary algorithm selects the best variant at each evolutionary step, it doesn\\u2019t rely on the gradient, so the optimization direction is essentially random, making the process less controllable.\\n\\ntSMC [2] proposes a novel guidance method but **requires $p_{y|x^0}(y|x^0)$, which is a classifier trained on labels** (refer to Sections 5.2 and 6.2 in [2] on the choice of such a likelihood $p$) and is thus not directly comparable to our work; further, they experiment on protein binding, which is intrinsically different from our task in molecule optimization.\\n\\nWe have removed the sentence \\u201cTo the best of our knowledge, this is the first work of its kind.\\u201d from the introduction in our current revised paper to avoid further confusion. \\n\\n### **2. Related works**\\nFor the broader discussion of guided diffusion beyond the scope of our work on molecule optimization, we have added the papers you suggested to inform further discussion.\\n\\n### **3. Additional experiments**\\nAs stated above, our motivation is not derivative-free guidance but to bypass the need to train a neural predictor or a conditional diffusion model to achieve controllable generation.\\n- **Evolutionary algorithm**: we added the results using the evolutionary algorithm [1,7] to the global comment above; please also see **Appendix I** for detailed results and analysis. We can observe that ChemGuide outperforms the evolutionary algorithm in terms of force RMS and energy above the ground state.\\n- **Neural network guidance**: we replaced the non-differentiable oracle with the neural network potential AIMNet2; the results are reported in the global comment above, please also see **Appendix H3** for detailed results and analysis. It turns out that using AIMNet2 for guidance negatively impacted performance. 
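For concreteness, the zeroth-order (SPSA-style) force-gradient estimation discussed in this rebuttal can be sketched as follows. This is a minimal illustration, not our actual implementation: the `oracle` callable and all names are hypothetical stand-ins for the xTB force evaluation, and the mean-centering of the Gaussian perturbation reflects the zero-mean requirement for equivariant 3D diffusion models that rules out constant-shift finite differences.

```python
import numpy as np

def spsa_gradient(oracle, z_x, c=1e-3, n_samples=8, rng=None):
    """Two-sided SPSA-style estimate of d(oracle)/d(z_x) for a scalar
    oracle (e.g. a force norm) evaluated on atom coordinates z_x of
    shape (N, 3), using only black-box oracle calls.

    Each Gaussian perturbation is mean-centered per coordinate axis, so
    z_x +/- c*u keeps a zero center of mass.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(z_x)
    for _ in range(n_samples):
        u = rng.standard_normal(z_x.shape)
        u -= u.mean(axis=0, keepdims=True)  # zero-mean perturbation
        # (F(z + cU) - F(z - cU)) / (2c), scaled along the direction U
        delta = oracle(z_x + c * u) - oracle(z_x - c * u)
        grad += delta / (2.0 * c) * u
    return grad / n_samples
```

With a differentiable stand-in such as `oracle = lambda z: float(np.sum(z ** 2))`, the estimate aligns with the analytic gradient $2z$ as the number of perturbation samples grows, while in the intended use case the oracle is only ever called as a black box, so no backpropagation through the chemistry software is needed.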
\\n\\n### **References**\\n[1] Schneuing, A., Du, Y., Harris, C., Jamasb, A., Igashov, I., Du, W., Blundell, T., Li\\u00f3, P., Gomes, C., Welling, M. and Bronstein, M., 2022. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695.\\n\\n[2] Wu, L., Trippe, B., Naesseth, C., Blei, D. and Cunningham, J.P., 2024. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis, 2021.\\n\\n[4] Clement Vignac, Igor Krawczuk, Antoine Siraudin, Bohan Wang, Volkan Cevher, and Pascal Frossard. Digress: Discrete denoising diffusion for graph generation. arXiv preprint arXiv:2209.14734, 2022.\\n\\n[5] James C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37:332\\u2013341, 1992. URL https://api.semanticscholar.org/CorpusID:122365276.\\n\\n[6] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527\\u2013566, 2017\\n\\n[7] Huang, Y., Ghatare, A., Liu, Y., Hu, Z., Zhang, Q., Sastry, C.S., Gururani, S., Oore, S. and Yue, Y., Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion. In Forty-first International Conference on Machine Learning, 2024.\"}", "{\"comment\": [\"We thank all reviewers for taking the time to review and for their constructive feedback! We appreciate their effort and suggestions to make our paper a better work! 
We summarize the changes in our current revised paper (in blue text) as follows.\", \"**List of content (Appendix A)**: for convenience of reading the Appendix, we added a list of content to Appendix A.\", \"**More experiments**:\", \"**Effect of guidance steps (Appendix H2)**: we explored reducing guidance steps from 400 to 100, 200, and 300 in Appendix H.2.\", \"**AIMNet2 (Appendix H3)**: we replaced the non-differentiable oracle xTB with a differentiable neural network potential AIMNet2 for guiding force optimization, and compared the gradients from xTB and AIMNet2 by plotting their cosine similarities.\", \"**Other optimization algorithms (Appendix I)**: we used an evolutionary algorithm for force optimization in Appendix I.\", \"We would like to summarize our motivations as follows.\", \"### **1. Motivation**\", \"for molecule optimization, we aim to achieve controllable generation at inference time with quantum chemistry software to bypass the requirement to train a neural property predictor or a conditional diffusion model that needs labels.\", \"We choose stability (defined by 3D molecule structure) as the molecule property to explore, hence using a 3D diffusion backbone.\", \"Our method builds on previously proposed guided diffusion methods, and the use of zeroth-order optimization techniques for gradient estimation is employed to overcome the difficulty that we cannot backpropagate through the chemistry software.\", \"### **2. 
Importance of stability optimization**\", \"To generate **3D** molecules (atoms and coordinates) that are valid and **stable** with close-to-ground-state geometries.\", \"We introduced a more strict, robust, and challenging stability metric, energy above ground state, than the current mainstream literature metric with chemical valencies.\", \"To generate **stable** 3D molecules of **conditioned quantum chemical properties**.\", \"The motivation behind generating stable close-to-ground-state molecules is that molecular properties are in many cases geometry-related. And ground state geometries are important.\", \"Take the example in Appendix D - say we need to generate 3D molecules that are non-polar, a generative model can very well generate a line-shaped H-O-H water molecule, which is non-polar and correct in valencies. But a line-shaped water molecule is not considered stable and thus this generated 3D molecule can be misleading in real-life applications. Our work aims to avoid such scenarios and generate **stable** molecules by introducing and estimating quantum mechanical (QM) guidance.\"]}", "{\"comment\": \"We hope the above information helps address your concerns and questions. We are happy to answer any future questions you may have. Thank you!\\n\\nWe also included additional results and summaries in <https://openreview.net/forum?id=4dAgG8ma3B&noteId=UrUPt5fBhg>.\"}", "{\"comment\": \"Thank you very much for your engagement in the discussion and valuable suggestions to make our paper better.\"}", "{\"summary\": \"The authors propose ChemGuide, a method for estimating diffusion guidance (i.e. gradients) from a non-differentiable property predictor. The goal is to eliminate the need for labeled training data typically required for property prediction networks. They demonstrate their approach in the context of 3D molecular generation. 
ChemGuide enables an unconditional diffusion model to generate more stable 3D molecular structures by incorporating guidance from quantum chemistry calculations (GFN2-xTB) that serve as a non-differentiable oracle.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"**Strong and relevant contribution**: The idea proposed in the paper has clearly relevant applications in a number of domains (i.e. any field with expensive oracles) and naturally extends existing efforts in the field of diffusion models.\", \"**Novelty**: The method is conceptually simple yet novel, opening the door for various applications which could benefit from the guidance of a non-differentiable oracle.\", \"**Thorough empirical evaluation**: The paper presents thorough empirical analysis, effectively demonstrating both the strengths and limitations of the proposed method. The experiments span multiple datasets (QM9 and GEOM), various molecular properties, and different guidance approaches (explicit, implicit, and combined).\", \"**Extensive analysis**: the empirical observations are grounded in real-world chemistry insights, with careful analysis of failure cases and performance trade-offs.\", \"**Clarity of presentation**: the paper is well-written and includes many relevant and well-designed figures.\"], \"weaknesses\": [\"**Justification of the zeroth-order optimization and comparison to other non-gradient optimization methods**: The paper lacks clear justification for choosing SPSA as the zeroth-order optimization method. Section 3.2 would benefit from a discussion of alternative approaches (e.g., finite differences, evolution strategies) and justification for their specific choice in terms of computational efficiency and accuracy trade-offs, and suitability for this particular application of guiding molecular diffusion models. 
A short paragraph would suffice here.\", \"**Assessing the quality of the gradients obtained with ChemGuide vs a differentiable regressor** While the paper shows final performance metrics, it lacks direct analysis comparing the gradients estimated via zeroth-order optimization to those from a differentiable regressor. Such a comparison could provide insight into how reliable CHEMGUIDE's estimated gradients are compared to differentiated gradients. For example, the authors could plot the cosine similarity between CHEMGUIDE's estimated gradients and those from a differentiable regressor across different timesteps of the diffusion process.\", \"**Explain which guidance method is suitable for which property**: The authors observe that noisy and clean guidance methods appear complementary, with properties poorly optimized by one often well-optimized by another (e.g., \\u03b1 vs \\u2206\\u03f5). However, the paper would be more practically useful if it provided explanations for these differences, helping practitioners choose the appropriate method for their specific use case.\"], \"questions\": [\"**clarifications**\", \"Can you clarify which aspects of your method implement SPSA? Equation 15 combines two ideas: 1) introducing random perturbations for atom coordinates from a standard normal distribution, and 2) using these perturbations in a finite-difference approximation of the gradient. It would be helpful to explicitly state which of these constitutes SPSA and how it differs from standard finite differences.\", \"What are the theoretical requirements for SPSA convergence? 
The paper mentions continuity of z as a requirement, but are there other conditions (e.g., smoothness, bounded variance) needed for the gradient estimates to be reliable?\", \"**suggestions**\", \"Given the extensive appendix (sections A-K), adding a table of contents at its beginning would improve navigation.\", \"Consider adding a brief discussion of computational overhead introduced by SPSA compared to standard finite differences.\", \"**nitpicks**\", \"'Stability' is unnecessarily capitalized on page 6, section 4.2\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
It is a semi-empirical density functional-based tight-binding method with reasonable accuracy and computational cost, and unlike neural networks, does not suffer from out-of-distribution generalization problems (see Figure 7 in Appendix H.4). When a decoded 3D geometry from step $t$ is invalid during the generative process, GFN2-xTB will catch such undesired situations. Since the estimated guidance is not informative in such cases and calculating properties for invalid conformations is theoretically not correct, we add no guidance for the invalid molecules in such cases at step $t$ and allow more steps for the model to improve by itself.\\n\\n### **4. Automated scale scheduling**\\n\\nWe noticed the need for automated scale scheduling. However, since our paper mainly focuses on introducing quantum chemistry to the generative process for molecules, we mentioned this in our future work section on page 10 line 530:\\n\\n\\u201cAdditionally, we observe that the guidance scale plays a critical role in the generative process of guided diffusion for molecules, but it is generally difficult to determine in advance. This opens the possibility for research into more effective methods for selecting guidance scales, including automated scale scheduling that optimizes guidance strength.\\u201d\\n\\nWe expect that automated scale scheduling will continue to improve ChemGuide from the results shown in Tables 1 and 2. This would require extra designing, tuning, and computational costs and is beyond the scope of this work.\\n\\n### **5. Code availability**\\n\\n**We provided access to our code in Appendix A: <https://anonymous.4open.science/r/ChemGuide> in our initial submission**, and added the link in the abstract in our current revised version.\\n\\n### **References**\\n[1] De Cao, Nicola, and Thomas Kipf. \\\"MolGAN: An implicit generative model for small molecular graphs.\\\" arXiv preprint arXiv:1805.11973 (2018).\\n\\n[2] Hoogeboom, Emiel, et al. 
\\\"Equivariant diffusion for molecule generation in 3d.\\\" International conference on machine learning. PMLR, 2022.\\n\\n[3] Xu, Minkai, et al. \\\"Geometric latent diffusion models for 3d molecule generation.\\\" International Conference on Machine Learning. PMLR, 2023.\"}", "{\"comment\": \"Thank you for addressing my concerns in your revised manuscript. I acknowledge the improvements made, particularly in adding new experiments in the appendix. I appreciate your efforts and will increase my score to 6.\"}", "{\"comment\": \"Thank you for taking the time to review and provide feedback! We address the weaknesses and questions as follows.\\n\\n### **1. More baselines, guidance methods & datasets**\\n#### - **Additional baselines** \\nOur ChemGuide model aims to generate **stable 3D** molecules (atoms + coordinates) with geometries close to their ground states. In our model, a generated 3D molecule is represented by its atoms and their coordinates. In comparison, MolGAN [1] was designed to generate **valid 2D** molecules with correct valency requirements. In MolGAN, a molecule is represented by a graph, with atoms as the vertices and bonds as the edges. **For molecule stability, it can be only defined on 3D structures over 2D graphs**.\\n\\nWhile MolGAN shows high potential in generating valid 2D molecules, generating **valid** 3D molecules is inherently more challenging. On the other hand, we are also targeting generating **stable** 3D molecules. To start, a valid 3D molecule must also meet the requirements for stable 2D molecules - correct valencies. In the 3D scenario, a bond is inferred from the generated coordinates by quantum mechanics. 
The model needs to learn the underlying quantum chemistry rules to generate a valid 3D molecule, such as the correct distance for desired bonds, the correct dihedral angles for correct configurations, etc.\\n\\nWe designed our model to generate 3D molecules that are chemically correct but also thermodynamically stable with close-to-ground-state geometries. Additionally, we tasked our model to perform conditional generation of 3D molecules with desired quantum chemical properties such as HOMO-LUMO gap and polarizability - these properties are also more intricate than the properties used in MolGAN, such as drug-likeness and solubility.\\n\\nFrom the above considerations, MolGAN cannot be used as a 3D benchmark, but we are happy to consider other 3D benchmarks if more related models can be suggested. For a more robust evaluation of ChemGuide, we selected EDM [2] and GeoLDM [3] as the baseline models - they are most relevant to our work in generating valid 3D molecules (with conditioned properties) and their results were the recent state of the art. Our stability criteria are a proper superset of the stability definitions in the baselines and thus more robust and strict (see tables 1-3). We also conducted a comprehensive comparison of conditional 3D molecule generation (see tables 4-6).\\n\\n#### - **Other existing guidance methods** \\nIn the [global official comment](https://openreview.net/forum?id=4dAgG8ma3B&noteId=UrUPt5fBhg) above, we provide more guidance results for stability optimization with **1**. neural networks for force prediction (**Appendix H3**) and **2**. evolutionary algorithms (**Appendix I**). We can observe that ChemGuide performs the best compared to different guidance methods, with mostly decreased energy above ground state percentages, which demonstrates the effectiveness of our method. 
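As a rough, self-contained illustration of how guidance gradients can be obtained directly from a black-box, non-differentiable oracle via zero-mean random perturbations, here is a minimal NumPy sketch. All names are illustrative placeholders (e.g., `oracle` stands in for a property evaluator such as an xTB call), not the authors' actual implementation:

```python
import numpy as np

def zeroth_order_gradient(oracle, x, mu=1e-3, n_samples=8):
    """SPSA-style zeroth-order gradient estimate for a black-box scalar oracle.

    Averages  (f(x + mu*d) - f(x - mu*d)) / (2*mu) * d  over zero-mean
    Gaussian perturbations d ~ N(0, I); only oracle *evaluations* are
    needed, no backpropagation through the oracle.
    """
    g_hat = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        d = np.random.randn(*x.shape)  # zero-mean perturbation direction
        diff = oracle(x + mu * d) - oracle(x - mu * d)
        g_hat += diff / (2.0 * mu) * d
    return g_hat / n_samples
```

For a differentiable oracle the estimator is unbiased in expectation, so averaging more samples recovers the true gradient direction; in a guided diffusion step, such an estimate could stand in for the gradient of a trained surrogate regressor.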
\\n\\nWe would also like to note that ChemGuide achieves guidance in the form of gradients **directly** from quantum chemistry, so there is no equivalent counterpart to strictly compare with, since neural networks are surrogate models for force prediction (trained on the labels produced by quantum chemistry) and evolutionary algorithms incur uncontrollable randomness during the evolution process. \\n\\n#### - **Additional datasets** \\nIn our results, we presented the results with the QM9 dataset and the GEOM dataset. They are two of the most widely recognized datasets in the community, and they are large: QM9 has 134K ground-state molecules and GEOM has 37M conformational molecules. Considering the scale of the datasets and the level of quantum chemical theory, QM9 and GEOM are two of the best benchmark datasets in the field, on which existing ML algorithms often struggle. Considering the computational cost, we focused more in-depth on the current two datasets to provide more comprehensive results on them rather than going in breadth across a multitude of datasets.\\n\\nIf the reviewer has recommendations for additional datasets, we would be happy to look into them.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I appreciate the rebuttal from the authors and the extended experiments. However, the guidance baselines are not comprehensive. Works such as [1,2] and those brought up by Reviewer tzoc are related and can be compared. 
I will keep the score due to this and my understanding of the issues the other reviewers discussed.\\n\\n[1] Amortizing intractable inference in diffusion models for vision, language, and control\\n\\n[2] Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction\"}", "{\"comment\": \"Thank you very much for your engagement in the discussion and valuable suggestions to make our paper better.\"}", "{\"title\": \"To Reviewer tzoc: can you kindly move your comment to your review thread?\", \"comment\": \"Thank you for the quick response! We noticed that you are replying under the thread of **Reviewer V6At**. Your comment seems to be more relevant to your review thread and our corresponding rebuttal at <https://openreview.net/forum?id=4dAgG8ma3B&noteId=qEaB87oDLv>. We kindly ask if you could move or add your comment there (or wherever else you see fit) so that potential future readers won\\u2019t get confused. We will post more follow-ups soon. Thank you!\"}", "{\"comment\": \"#### - **Evolutionary algorithm**\\n\\nWe explore using evolutionary algorithms as an optimization method. Details can be found in **Appendix I**. We follow [1], and the parameters of the algorithm are variant size $k$ and evolution interval $E$. During the diffusion process, the population is preserved, and evolution is performed at fixed intervals. At each evolution step, $k-1$ noise perturbations are added to the population, resulting in $k$ variants (we treat the non-perturbed molecule as a variant). The best variant is then selected as the new population based on evaluations from the non-differentiable oracle (i.e. xTB). Specifically, the oracle calculates the force RMS for each variant, and the variant with the lowest force RMS is selected. We explore several combinations of variant size and evolution interval in the table below. 
ChemGuide significantly outperforms it in terms of force RMS and energy above the ground state.\", \"table_3\": \"Evolutionary algorithm results of 500 molecules on QM9\\n\\n| Metric \\t| ($k=3$, $E=20$) \\t| ($k=3$, $E=50$) \\t| ($k=5$, $E=20$) \\t| ($k=5$, $E=50$) \\t| ChemGuide \\t| EDM \\t| GeoLDM \\t|\\n|-------------------------------|-------------------|-------------------|-------------------|--------------------|----------------------|------------|------------|\\n| **Force RMS (Eh/Bohr)** \\t| 0.0112 \\t| 0.0111 \\t| 0.0110 \\t| 0.0109 (-1.75%\\u2193) | **0.0104*** (-6.76%\\u2193) | 0.0114 \\t| 0.0111 \\t|\\n| **Validity** \\t| 91.60% \\t| 87.60% \\t| 90.00% \\t| **92.40%*** (2.60%\\u2191) | 91.40% (1.60%\\u2191) \\t| 86.60% \\t| 89.80% \\t|\\n| **Uniqueness** \\t| **100.00%*** \\t| **100.00%*** \\t| 99.78% \\t| **100.00%*** \\t| **100.00%*** \\t| 100.00%* | 100.00%* |\\n| **Atom Stability** \\t| 98.95% \\t| 98.76% \\t| 98.66% \\t| **99.14%*** (0.21%\\u2191) | 99.02% (0.09%\\u2191) \\t| 98.53% \\t| 98.93% \\t|\\n| **Molecule Stability** | 90.00% \\t| 87.60% \\t| 88.00% \\t| **91.20%*** (2.40%\\u2191) | 90.60% (1.80%\\u2191) \\t| 83.60% \\t| 88.80% \\t|\\n| **Energy above ground state (Eh)** | 0.0048 (-4.45%\\u2193) | 0.0053 \\t| 0.0056 \\t| 0.0054 \\t| **0.0042*** (-15.78%\\u2193) | 0.0072 \\t| 0.0050\\t|\\n\\n### **Reference**\\n[1] Huang, Y., Ghatare, A., Liu, Y., Hu, Z., Zhang, Q., Sastry, C.S., Gururani, S., Oore, S. and Yue, Y., Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion. 
In Forty-first International Conference on Machine Learning, 2024.\"}", "{\"title\": \"Thank you for the discussion\", \"comment\": \"Thank the authors for writing the rebuttal with additional experimental results.\\n\\nIn general, I agree with the authors on the nuances of the settings of different methods, but inherently these methods are connected: for twisted SMC, they used a proposal with an additional gradient from the target, but it is not necessary.\\n\\nI hate to say this, but for such a popular problem with many works (either on proteins/molecules/materials or images/music/language/etc.), I don't see major technical advancements this paper brings. In that sense, I am still okay with a more application-oriented paper if the authors can compare and discuss other methods well with a decent performance improvement.\\n\\nI appreciate again the effort the authors made during the rebuttal, but I will maintain my score.\"}", "{\"comment\": \"### **4. Clarification on SPSA and theoretical results**\\n\\nFor a differentiable $f: E\\\\subseteq R^d \\\\rightarrow R$, finite-difference estimates the gradient at $x$ as:\\n$$\\\\hat{g}(x)=\\\\frac{f(x+\\\\epsilon)-f(x-\\\\epsilon)}{2\\\\epsilon}$$\\nand SPSA [2] estimates the gradient with some scale $\\\\mu>0$ and a zero-mean perturbation vector $\\\\Delta$ as:\\n$$\\\\hat{g}(x)=\\\\frac{f(x+\\\\mu\\\\Delta)-f(x-\\\\mu\\\\Delta)}{2\\\\mu\\\\Delta}$$\\nwhere we abuse notation by using division by $\\\\Delta$ to denote the element-wise operation (e.g., $\\\\Delta_i$ for each dimension $i=1, 2, \\u2026, d$).\\n\\nThe key difference between finite difference and SPSA is the requirement of $\\\\Delta$ being zero-mean, which made it possible for us to obey the zero center-of-gravity requirement of 3D diffusion models to be equivariant to transformations. However, SPSA indeed can be seen as 1. sample perturbation from the normal distribution; 2. 
apply finite-difference to approximate the gradients.\\n\\nFurther, SPSA [2] requires $\\Delta$ to have bounded inverse moments, which rules out $\\Delta$ being Gaussian. In order to sample $\\Delta$ from the standard normal distribution, which is one of the frequently used distributions with zero mean, we follow [3] and estimate the gradient (which can be seen as a variant of the original SPSA) as follows: \\n$$\\hat{g}_{\\Delta}(x)=\\frac{f(x+\\mu\\Delta)-f(x-\\mu\\Delta)}{2\\mu}\\cdot \\Delta$$\\n\\nWe describe $z$ as \\u201ccontinuous\\u201d as opposed to \\u201cdiscrete\\u201d variables (i.e. molecules in the molecular space), such that we can add the perturbation vector directly to $z$. In the context of gradient estimation, it would suffice for $f$ to be differentiable; however, if one were to use the estimated gradient $\\hat{g}_{\\Delta}(x)$ for optimization problems (e.g., finding the minimizer of $f$), [3] assumes $f$ is convex and studied the properties and convergence of such algorithms under various settings (e.g., $f$ is smooth/non-smooth).\\n\\nIn terms of practice, SPSA has empirically demonstrated its effectiveness in deep learning applications, such as in Large Language Models [4], and other domains [5].\\n\\n### **5. Response to suggestions & nitpicks**\\n\\nWe have added a table of contents at the beginning of the appendix, and changed the capitalized \\u201cStability\\u201d accordingly in our current revised submission. Thank you for your detailed inputs to make our paper more accessible! \\n\\nWe include the discussion on the computation overhead of SPSA in **Appendix F** Motivation for Gradient Estimation with SPSA. There are two extra operations from SPSA: 1. sampling $U$ from Gaussian; 2. 
matrix multiplication with $U$, where the extra computation overhead of SPSA compared to finite-difference methods is ignorable during the diffusion process, since both operations are fast and well-optimized on GPUs in modern machine learning practices.\\n\\n### **References**\\n\\n[1] Dylan Anstine, Roman Zubatyuk, and Olexandr Isayev. Aimnet2: a neural network potential to meet your neutral, charged, organic, and elemental-organic needs. 2024.\\n\\n[2] James C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37:332\\u2013341, 1992. URL https://api.semanticscholar.org/CorpusID:122365276.\\n\\n[3] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527\\u2013566, 2017\\n\\n[4] Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alexandru Damian, Jason D. Lee, Danqi Chen, and Sanjeev Arora. Fine-tuning language models with just forward passes. NeurIPS 2023\\n\\n[5] Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred O Hero III, and Pramod K Varshney. A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. IEEE Signal Processing Magazine, 37(5):43\\u201354, 2020.\"}", "{\"metareview\": \"The paper proposes a novel guidance technique for molecule generation. Namely, it targets generating molecules closer to their equilibrium state (with lower force magnitude) by adding the proposed guidance term to the diffusion model. The main challenge of this problem is that the objective function (evaluating the force) is not differentiable. The authors propose to address this challenge by evaluating the finite difference of the objective along a random direction. Reviewers tzoc and V6At found the empirical study insufficient but still up to the ICLR standards. 
On the contrary, Reviewer wXP8 extensively argued for the acceptance of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Several reviewers raised concerns regarding evaluation, e.g. comparison with evolutionary algorithms (as suggested by Reviewer tzoc). The authors provided the requested experiments during the rebuttal which partially addressed the reviewers' concerns. Finally, Reviewer wXP8 championed the acceptance of the paper in the final discussion, which is an important indicator of the paper's relevance to the ICLR community.\"}", "{\"comment\": \"We hope the above information helps address your concerns and questions. We are happy to answer any future questions you may have. Thank you!\\n\\nWe also included additional results and summaries in <https://openreview.net/forum?id=4dAgG8ma3B&noteId=UrUPt5fBhg>.\"}" ] }
4ciEeIiIJ7
Let’s disagree to agree: Evaluating collective disagreement among AI vision systems
[ "Brian Cheung", "Erin Grant", "David Mayo", "Helen Yang", "Boris Katz", "Tomaso A Poggio" ]
Recent advancements in artificial intelligence (AI) have led to the development of AI vision systems that closely resemble biological vision in terms of both behavior and neural recordings. While prior research in modeling biological vision has largely concentrated on comparing \emph{individual} AI systems to a biological counterpart, our study instead investigates the collective behavior of model populations. We focus on inputs that generate the most divergent responses among a diverse population of AI vision systems, as measured by their aggregate disagreement. We would expect that the factors driving disagreement among AI systems are also causes of misalignment between AI systems and human perception. We challenge this expectation by demonstrating alignment between AI systems and humans at the \emph{population} level, even for images that generate divergent responses among AI systems. This unexpected finding challenges our understanding of the relationship between the limitations of AI systems and human perception, suggesting that even the most challenging stimuli for AI systems are reflective of human perceptual difficulties.
[ "deep learning", "representational similarity" ]
Reject
https://openreview.net/pdf?id=4ciEeIiIJ7
https://openreview.net/forum?id=4ciEeIiIJ7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xZ292WTTck", "txBcTqwxRZ", "riSXm8vpW2", "rFfGbF8sEh", "qInmezuJdz", "pXDEHBVxAO", "dqwqIyU2h9", "dUFstK4df8", "dSFuZqVyo0", "bmviF1HwYI", "ZXBrCrIR8A", "WPzQ1Mfj8f", "U0ol0apNff", "ScDpgxB354", "S8ZldKa4kK", "Rtzg2F0dXn", "IssXK6c09m", "AwRT0rHFDe", "ADfYdH8PjA", "8oH2SzZIv7", "8lyy0JpZMD", "8JjsnT4Edj" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732653555901, 1730674914672, 1730103972351, 1732723998743, 1732721857529, 1731436177540, 1731436087207, 1732727470408, 1734451246395, 1730694086140, 1732799081417, 1731442623536, 1730669108070, 1731563016136, 1730489647816, 1730673889539, 1732535154469, 1732730532177, 1731438915650, 1737523788113, 1731969922161, 1731561440009 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_vUbw" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_XaeV" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_vUbw" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_vUbw" ], [ "ICLR.cc/2025/Conference/Submission6737/Area_Chair_7WZo" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_WTkd" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_vUbw" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_pHQG" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_ihcH" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6737/Reviewer_tvpj" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_pHQG" ], [ "ICLR.cc/2025/Conference/Submission6737/Reviewer_vUbw" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ], [ "ICLR.cc/2025/Conference/Submission6737/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the thoughtful discussion. We appreciate the comments, as they improve our work and have generated the follow-up analysis described below.\\n\\nDisagreement is an aggregate metric over a collection of models. The reasons for disagreement will not be uniform or unanimous among models, just as they are not among a group of humans. We agree that we would like to consider factors that cause or lead to disagreement. These causes can impact disagreement through two inputs to the agreement calculation:\\n\\n1) The aggregate behavior of the population of models\\n2) The images that the population of models operate over\\n\\n> This argument seems to contradict what you were saying earlier about the unique model (architecture), not impacting the prediction.\\n\\nTo support and better understand the attributes that lead to agreement and disagreement, we have analyzed a counterfactual population by training 102 ResNet-50 models which were all trained on ImageNet, with the only difference being the random seed used in initialization. This population produces nearly identical results for the ImageNet and ObjectNet datasets (0.29 vs 0.29 correlation with MVT, 0.34 vs 0.41 correlation with ObjectNet). This is a very low-diversity population of models that only vary over the random seed, which can replicate most of the results of the population of 1032 computer vision models which have been created over the past decade by the community. 
This indicates that most of the explainable variance in agreement/disagreement is not caused by variations in the model architecture nor differences in the training data.\"}", "{\"summary\": \"This paper assesses the disagreement among a population of artificial vision systems (1032 models) and compares it with the disagreement among a population of humans (42 human participants). Unlike previous works, populations of agents and humans are compared on a collective level, instead of an individual level. The paper aims to prove that factors that cause disagreement among AI systems coincide with the factors that cause human disagreement, at a population level.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper has the following (potentially) strong points:\\n\\n1. The paper assesses the overlap between AI vision models and human disagreement on a collective/population level, rather than an individual level. This is an original approach as far as I know. The assumption is that by identifying patterns in how populations of AI models fail similarly to humans, training methods or architectures that handle difficult stimuli could be developed, and thus improve model robustness and interpretability. The proposed many-to-many comparison is something worth considering in the future, alongside already-established measures.\\n\\n2. This study models the largest population (afaik) of artificial vision models, spanning 1032 AI models with various architectures, pretraining regimes and data. Such a population should provide a comprehensive view of collective disagreement. However, how each of these models influences the collective disagreement is not discussed enough, but could have been a point to add more value to the paper.\\n\\n3. 
It aims to uncover and highlight common factors between humans and artificial models of vision that cause difficulty in object recognition.\", \"weaknesses\": \"This work presents the following weaknesses:\\n\\n1. My first concern is related to the assumption from which the paper starts (L19) about the \\u201c factors driving disagreement among AI systems are also causing misalignment between AI systems and humans perception\\u201d - why would that be the case? It states that the current study challenges (L484) \\u201cthe assumption present in prior work that disagreement among AI systems is unrelated to human visual processing\\u201d. But this assumption (L484) is not adequately founded, or at least not supported through the references provided which do not claim that disagreement between artificial models is unrelated to human visual processing. To reinforce, the initial assumption is not adequately discussed or supported by the correct references making it difficult to understand the motivation of the paper in the first place. \\n\\n\\n2. For a study comparing human and artificial visual systems, the authors might want to consider the body of literature that draws from neuroscience to better understand how convolutional neural networks (CNNs) could model early visual processing pathways [e.g. A Unified Theory of Early Visual Representations from Retina to Cortex (Lindsey et al., 2019); Spatial and Colour Opponency in Anatomically Constrained Deep Networks (Harris et al. , 2019)]. Such works aim to understand the similarities between human visual systems and artificial models at the lower level of neurons and how the functional and structural layouts of biological visual systems could better inform DNN architectures.\\n\\n3. While the idea of comparing many to many is interesting and could add value on top of accuracy and one-to-one error consistency measures, the experimental setup seems to be (visually) ill-posed. 
For instance, the challenging examples are complex scenes, e.g. Figure 12, in which the label corresponds to just one small part of the scene. It should not be surprising that both humans and machines have difficulty in correctly identifying the target class in these cases. But it is not justified to use this as a basis to say that machines and humans are making mistakes in the same kind of way - it is much more nuanced than that. \\n\\n4. While the assessment in Fig 6 aims to show the proportion of human-annotated top visual attributes, it is unclear on an instance level how and why humans and artificial models reach (dis)agreement. Take for example the cases where the model makes random kinds of predictions humans clearly would not. For example, Figure 3c is clearly not a roof tile, a scorpion, or a sandal - no human would guess any of those, although they could still be wrong of course.\", \"questions\": [\"In light of the previous comments, I think the main actionable points are:\", \"the motivation of the paper needs to be reconsidered and clarified\", \"so does the conclusion and interpretation of results, in particular, I would recommend more carefully interpreting the similarities between humans and artificial models.\"], \"further_clarification_is_also_needed_on\": [\"Figure 1 - the interpretation of the histograms for model and human agreement (\\u201chistograms along each axis reflect the proportion of images at each marginal agreement level\\u201d). The caption states there is a positive correlation but does not state how this conclusion is reached. Later on, Table 1 provides some values but the exact method for reaching those values is missing. Visually the histograms do not seem positively correlated, but again clarifying in text would be better.\", \"Details of the pretraining of each model, or at least grouped per family of models (maybe grouped by architecture type) used in this analysis would have been relevant. 
Also, further discussion and interpretation of results, again grouped per family of models could have added value to this paper. For example, how do different model architectures contribute to the level of disagreement?\", \"Again, for clarity, it would be good to state clearly how the values for correlation between model agreement and the human behavioural measures (Table 1) are computed.\", \"Line 432 - What is this subset of selected models? Based on what criteria were these models selected?\", \"Regarding low-agreement images, it would be interesting to assess the factors that cause disagreement at certain levels of accuracy. Are these factors maintained, and what factors remain/are discarded as the acceleration of agreement occurs (as per L440-442)?\", \"Finally, I think a section on the limitations of this study should be included. For example:\", \"the limited number of human participants might not reflect the full spectrum of human visual perception\", \"how does approximating perceptual abilities to population disagreement lead to overlooking specific, individual visual factors?\", \"is Fleiss\\u2019 Kappa the most suitable measure and are there any other agreement measures that could be explored instead?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The article explores the disagreement behaviors of AI vision systems, diverging from traditional approaches that compare individual AI models to biological vision. Instead, this study investigates patterns of agreement and disagreement among a diverse population of AI models by measuring \\\"aggregate disagreement\\\" across model outputs. 
It aims to determine which inputs produce the most divergent responses among models and assesses whether these inputs also create discrepancies between AI systems and human perception.\\nA significant finding is that even images causing high disagreement among AI models often align with human perceptual challenges. This alignment suggests that the limitations in AI models mirror similar perceptual difficulties in humans, offering valuable insights into AI-human vision comparisons at a population level. This work contributes to the field by reframing disagreement not as an intrinsic limitation of AI systems but as an opportunity to study the shared perceptual challenges between artificial and human vision systems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.Innovative Research Topic:\\nThe authors investigate an intriguing and novel research area by examining AI model and human visual disagreements at a population level. This approach is unique in that it moves beyond individual model comparisons to analyze the collective behavior of AI vision systems.\\n2.New Method for Measuring Human-AI Discrepancy:\\nBy introducing a method to measure disagreement at the population level, the study provides a new way to quantify the difference between AI models and human perception, adding a meaningful metric to the field.\\n3.Focus on Naturalistic Stimuli:\\nUnlike prior work that often uses synthetic stimuli, this study investigates the properties of naturalistic stimuli that elicit the most disagreement among AI models, making its findings more applicable to real-world scenarios.\\n4.Insights into AI-Human Perceptual Alignment:\\nThe article provides evidence suggesting that disagreements among AI systems are influenced by aspects of human visual perception, particularly in image difficulty, as measured by human behavioral data. 
This insight supports the idea that individual differences in AI vision systems may reflect differences in human visual processing rather than inherent AI limitations.\", \"weaknesses\": \"1.Limited Analysis of Outlier Cases:\\nThe authors report correlations between model agreement and human behavioral measures, but they do not analyze specific cases where model agreement is high but human difficulty is low, or vice versa. Such an analysis could provide deeper insights into unique points of divergence.\\n2.Lack of Architecture-Specific Insights:\\nAlthough multiple model architectures are included in the study, the authors do not analyze how different architectures impact the results. This oversight limits the understanding of how architectural variations might contribute to AI-human agreement or disagreement on challenging stimuli.\\n3.No Exploration of Methods to Reduce Disagreement:\\nWhile the study highlights greater disagreement on images of higher human difficulty, it does not explore whether certain methods, such as targeted model adjustments or expanded training datasets, could reduce this disagreement and improve alignment with human perception.\\n4.Insufficient Citations of Related Work on AI-Human Disagreement:\\nPrior research has shown that there are links between AI-human disagreement and human visual processing at the individual model level, yet the authors do not reference these foundational works. Including these citations could strengthen their arguments by situating the study within the existing body of research.\", \"questions\": \"1.Did the authors consider analyzing cases where model agreement is high but human difficulty is low, or where model agreement is low but human difficulty is high? 
Such cases might offer valuable insights into the nuanced differences between AI model behavior and human perception.\n2.Although multiple architectures were included, why did the authors not explore the impact of different architectures on the experimental results?\n3.Can the higher disagreement on challenging human images be reduced through specific adjustments to models or training datasets?\n4.Previous research has shown links between AI-human disagreement and human visual processing at the individual model level. Why were these relevant studies not carefully discussed in the related work section?\n\nIf the authors can address these issues, I would be happy to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for continuing the discussion on improving this work.\n\n> And going back to my main concern about your claim that humans and machines make mistakes in the same kind of way and why that is not true. \n\nThe minimum viewing time data we use from \\cite{mayo2023hard} presents the minimum time necessary for a majority of humans to correctly classify an image. We have done further analysis to decompose Figure 1 by images with different viewing times and find **disagreeable images take more time to recognize**. This trend can be seen to some degree in Figure 4, but the property becomes much more apparent simply by decomposing Figure 1 into different minimum viewing times. \n\nWe would like to highlight that population disagreement is not the same concept as mistakes. Disagreement is more akin to image hardness, which explains why the most disagreeable images actually require more time to recognize rather than less. We show there is an intrinsic notion of disagreement that model populations capture as well as human populations. 
And with the new experiments with non-diverse populations, this intrinsic disagreement does not appear to be a function of variability in model architecture or variability in training data.\n\n\n@article{mayo2023hard,\n title={How hard are computer vision datasets? Calibrating dataset difficulty to viewing time},\n author={Mayo, David and Cummings, Jesse and Lin, Xinyu and Gutfreund, Dan and Katz, Boris and Barbu, Andrei},\n journal={Advances in Neural Information Processing Systems},\n volume={36},\n pages={11008--11036},\n year={2023}\n}\"}", "{\"comment\": \"Thanks for getting back with clarification of this experiment.\n\n> This is a very low diversity population of models that only vary over random seed which can replicate most of the results of the population of 1032 computer vision models which have been created over the past decade by the community. This indicates that most of the explainable variance in agreement/disagreement is not caused by variations in the model architecture nor differences in the training data.\n\nVery well. My question is what causes the variance if it's not model architecture or training data? You already pointed to references that discussed model architecture not having an impact (Muttenthaler 2022, Conwell 2024), so I do not understand if you are only confirming their finding or claiming this as novel.\n\nAnd going back to my main **concern** about your claim that humans and machines make mistakes in the same kind of way and why that is not true. Think, for example, of shortcut learning (Geirhos, 2020). The examples are very illustrative of the difference between machine and human strategies. DNNs correctly classify cows with a grass background, but not so much with an unusual background such as the sea. 
On the other hand, humans recognise cows based on their shape/pattern/colour and while they could still be influenced by context, it is not such an important factor.\n\n*Geirhos, R., Jacobsen, J.H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M. and Wichmann, F.A., 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11), pp.665-673.*\"}", "{\"title\": \"Also thank you for the review\", \"comment\": \"We're responding to the reviews as timely as possible because the points you have made are important and we would like to discuss further.\"}", "{\"title\": \"Discussion of some key concerns\", \"comment\": \"> **Motivation isn't that convincing**\n\nWhile the actual notion of disagreement among a population of models has not been measured before our submission, it has been an explicitly stated assumption that the mistakes that AI models make are distinct from the mistakes that humans make. For instance, [Geirhos et al. (NeurIPS 2020)](https://arxiv.org/abs/2006.16736) make the points:\n\n\\\"The consistency between CNNs and human observers, however, is little above what can be expected by chance alone\\u2014indicating that humans and CNNs are likely implementing very different strategies.\\\"\n\n\\\"We conclude that there is a substantial algorithmic difference between human observers and the investigated sixteen CNNs: humans and CNNs are very likely implementing different strategies.\\\"\n\n\\u201cCohen\\u2019s $\\kappa$ for CNN-human consistency is very low for both models (`.068` for ResNet-50; `.066` for CORnetS) compared to `.331` for human-human consistency.\\u201d\n\nFurthermore, [Geirhos et al. 
(NeurIPS 2018)](https://arxiv.org/abs/1808.08750) make the point:\\n\\n\\u201cAdditionally, we find progressively diverging patterns of classification errors between humans and DNNs with weaker signals.\\u201d\\n\\n> **the training data is also a product of human reaction to ambiguity**\\n\\nAlthough the labels come from humans, the labels this model population sees are unanimous amongst all models. Therefore, we see that despite all models being trained to provide the same labeling over the training set, they disagree on held out images in a way that is similar to human populations.\"}", "{\"comment\": \"I believe you are not responding to my comment, but rather taking the discussion in a completely different direction. So before we proceed any further, could you please clarify what you understand by **disagreement** among model population?\\n\\nThis confusion goes back to my request to motivate and convincingly explain the assumption that this paper starts from (L19-20). In your very first reply, you gave the following answer:\\n>While the actual notion of disagreement among a population of models has not been measured before our submission, it has been an explicitly stated assumption that the mistakes that AI models make are distinct from the mistakes that humans make.\"}", "{\"metareview\": \"After careful consideration of the six expert reviews and the subsequent author-reviewer discussion, I recommend rejecting this submission. While the paper presents an interesting analysis of population-level disagreement between AI vision systems and human perception, several fundamental concerns remain unresolved despite the authors' partial engagement with reviewer feedback.\\n\\nThe paper's central contribution examines how populations of AI models and humans show collective disagreement on certain visual stimuli. The authors suggest that this reveals an unexpected alignment between AI and human perception, particularly for challenging images. 
However, multiple reviewers questioned the novelty and significance of this finding. As Reviewer pHQG articulated, the observation that ambiguous images are difficult for both humans and models is well-known, and the correlation in disagreement patterns may simply reflect inherent image ambiguity rather than revealing meaningful mechanistic similarities.\n\nThe theoretical foundation and motivation of the work drew substantial discussion. While the authors cited prior work suggesting differences between human and AI visual processing strategies, reviewers noted this does not fully justify the paper's core assumption about disagreement patterns. Reviewer vUbw highlighted how the paper conflates different types of differences between human and machine vision - strategic differences versus simple classification errors. This distinction wasn't adequately addressed in the authors' responses.\n\nThe experimental methodology, while extensive in using over 1,000 AI models, raised concerns about interpretation. The authors' additional analysis showing similar results with 102 ResNet-50 models varying only in random seeds is interesting but, as pointed out in the discussion, does not clearly establish what drives the observed disagreement patterns if not architectural or training differences. The paper lacks a clear explanation of the causal mechanisms underlying the reported correlations.\n\nSeveral reviewers also noted that the paper's practical implications remain unclear. As Reviewer tvpj emphasized, the results don't obviously point toward improvements in machine vision systems or deeper understanding of human vision. While the authors provided some responses about the value of studying edge cases, they didn't fully address how these findings could concretely advance the field.\n\nThe limited engagement with reviewer feedback is also concerning. While the authors had productive discussions with some reviewers, they left several comments unaddressed. 
The selective response pattern suggests that some issues with the work's motivation, interpretation, and significance remain unresolved.\n\nWhile the paper presents an interesting analytical approach and extensive experimental work, the combination of unclear theoretical foundations, questionable novelty, and limited practical implications leaves it in an incomplete state for publication at ICLR 2025. For any future submission, the authors are recommended to focus on establishing clearer causal mechanisms, better distinguishing their findings from known phenomena, and more concretely demonstrating the work's significance for advancing either machine or human vision understanding.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, there was extensive discussion between reviewers and authors about several fundamental aspects of the paper. A key concern raised by multiple reviewers (vUbw, pHQG) centered on the paper's core assumption that disagreement among AI systems indicates an inherent limitation rather than reflecting human perceptual challenges. While the authors attempted to address this by citing prior work suggesting differences between human and AI visual processing strategies, these reviewers found the response less convincing, noting that the paper potentially conflates different types of differences between human and machine vision.\n\nSeveral reviewers questioned the novelty and significance of the findings. Reviewer pHQG pointed out that the correlation between human and model disagreement on ambiguous images could simply reflect inherent image ambiguity rather than revealing meaningful mechanistic similarities. The authors responded by providing additional analysis using 102 ResNet-50 models with varying random seeds, showing similar disagreement patterns. 
However, this analysis, while interesting, did not fully address what drives these patterns if not architectural or training differences.\\n\\nThe authors engaged with concerns about experimental methodology and interpretation raised by Reviewer ihcH, clarifying that their human study included 2,647 participants rather than just 42, and explaining how edge cases could reveal important insights about visual intelligence. However, they did not address several other methodological concerns raised by other reviewers.\\n\\nMore fundamental questions about practical implications and future directions, raised particularly by Reviewer tvpj, received limited response. The authors' discussion of edge cases as \\u201coptical illusions for visual intelligence\\u201d did not fully satisfy concerns about how these findings could concretely advance either machine or human vision understanding.\\n\\nThe authors' engagement with reviewer feedback was selective, with some reviewers receiving detailed responses while others' concerns went unaddressed. This pattern of incomplete engagement suggests that several fundamental issues with the work's motivation, interpretation, and significance remain unresolved. In weighing these factors, the limited response to crucial theoretical and practical concerns, combined with the inability to convince reviewers of the work's novelty and significance, influenced the final rejection decision.\"}", "{\"summary\": \"The paper compares the collective behaviour of 1,032 AI vision systems with 42 humans in annotating images, investigating how various visual factors influence agreement levels. It highlights that images that are challenging for the AI systems often pose similar difficulties for humans. The paper suggests that there is an alignment in visual complexity across both groups. The study quantifies (dis)agreement among AI systems and compares the results with human annotations. 
Additional factors such as difficulty score, minimum viewing time, and specific visual properties are examined. This approach offers insights into common challenges shared by AI and human perception.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The comparison between model performance and human annotations is interesting and insightful.\", \"weaknesses\": [\"The paper is difficult to follow\", \"The motivation and contributions of the paper are not clear\", \"The paper lacks novelty, as it mainly consists of a comparison between the performance of machine learning models and human annotators. Readers may expect a novel methodology to be derived from these analyses.\", \"The paper lacks a discussion about the limitations and potential directions for future work\"], \"questions\": [\"It is unclear why the authors concluded from Figure 1 alone that the stimuli causing the most agreement/disagreement among AI systems also cause the most agreement/disagreement among humans. Although the figure shows the agreement levels, it lacks specific information on the stimuli that contributed to the obtained outcomes\", \"In Table 1, what is the motivation behind comparing the models' agreement with the human viewing time and the difficulty score?\", \"It is unclear why the authors concluded from Table 1 that ObjectNet is more challenging for both humans and the models?\", \"I would recommend providing a correlation measure for Figure 5.\", \"Do you expect any bias in human annotations?\", \"In Figure 6, how did you determine the visual factors for the models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. I will keep my score because my concerns regarding the motivation and assumptions in the paper were not resolved. 
I appreciate the approach to investigating population disagreement. However, as I already said and as other reviewers also noted, it is a well-known fact that ambiguous images that are known to be difficult for models can also be difficult for humans, but that can be for many different reasons. I encourage you to keep investigating and to more clearly interpret and present your results.\"}", "{\"title\": \"Thanks for the comments - score not changed.\", \"comment\": [\"The quotes from Geirhos et al. are mainly about strategies. It is fair to say that the quote \\\"the consistency between CNNs and human observers, however, is little above what can be expected by chance alone\\\" and fig 1 in Geirhos et al. are about mistakes, not just strategies, and it does raise questions that we observe human-model consistency that seems driven by the image/label rather than random chance. However, that means your paper demonstrates the need for context in Geirhos et al. It does not mean that the consistency you show is nonobvious or a significant contribution - that's a separate question.\", \"So, let's come to that question and your second response point above. You said something similar to reviewer vUbw - \\\"humans and machines are potentially challenged in the same way by the same images. We think this is not obvious at the population level because all the models were individually trained/fine-tuned on the same labels, so there's no ambiguity in their training, but ambiguity and disagreement nonetheless arises. And that ambiguity and disagreement appears to be aligned with populations of humans.\\\" And to me, \\\"Although the labels come from humans, the labels this model population sees are unanimous amongst all models. 
Therefore, we see that despite all models being trained to provide the same labeling over the training set, they disagree on held out images in a way that is similar to human populations.\\\"\", \"You're right to say that the models were trained on the same labels for each image and that takes away one source of ambiguity.\", \"However, my point is that when it comes to ambiguous images, you'll have groups of images in the training dataset that contain similar features (along with some different ones), but have different labels, and groups that have the same labels but different features (along with some similar ones). That is another source of ambiguity, so \\\"all the models were individually trained/fine-tuned on the same labels, so there's no ambiguity in their training\\\" seems false.\", \"Not only is this a source of ambiguity, it's a well-known one. And not only is it well-known, I think it's the one driving your results.\", \"I like the idea of investigating populations and your approach to experimentation. I also think the paper is well-written and visualized. However, I don't think you've found something nonobvious yet, and would encourage you to keep investigating. I agree with reviewer vUbw that more careful interpretation of similarity is necessary. I'll maintain my score.\"]}", "{\"summary\": \"The paper brings a new point from population-level comparisons between AI and human vision systems, different from the previous individual Ai and human comparison. The authors conduct experiments using a large population of 1032 models and a previous user study with 42 human participants. They use Fleiss' kappa to quantify the level of agreement and find out a few interesting points on the correlation between AI model (dis)agreement and human (dis)agreement. 
They claim that the low agreement on hard images is due to intrinsic perceptual challenges shared by both AI and humans instead of model structure limitations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The strengths:\", \"brings a novel view through population-level comparison of AI and human vision systems.\", \"conducts extensive experiments on a large population of AI models\", \"Interesting findings on AI models not performing well on difficult images due to perceptual challenges that humans face as well.\"], \"weaknesses\": [\"Weaknesses of this paper include:\", \"Some findings are quite intuitive, for example, the correlation between AI (dis)agreement and human (dis)agreement. This is probably because the labels are created by humans.\", \"42 participants from the user study might be a bit biased. The authors may conduct a few more user studies and combine them with previous data.\", \"The figure layout does not look very good; some images take up too much space but contain relatively little content.\", \"at line 402, \\\"Images at low agreement levels are produce...\\\", should be \\\"... are producing...\\\"\"], \"questions\": [\"In Fig 1, it is a bit surprising that there are very few images with high human agreement from the top histogram, which means humans rarely have full agreement on images. Could you explain possible reasons behind this?\", \"If humans and AI cannot recognize the difficult images or the edge-case images, it means vision alone cannot solve the problem and we probably do not have a better solution using only vision. What other benefits could it bring to us if we study more on the difficult images? 
In other words, how does studying the edge-case images help?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> But it is not justified to use this as a basis to say that machines and humans are making mistakes in the same kind of way - it is much more nuanced than that.\\n\\nCould you clarify what is meant by nuanced here?\\n\\n> While the assessment in Fig 6 aims to show the proportion of human-annotated top visual attributes, it is unclear on an instance level how and why humans and artificial models reach (dis)agreement. Take for example the cases where the model makes random kinds of predictions humans clearly would not. For example, Figure 3c is clearly not a roof tile, a scorpion, or a sandal - no human would guess any of those, although they could still be wrong of course.\\n\\nKeep in mind that the percentage of models making those labels is far smaller (6% of models) for low agreement images than high agreement images (100% of models). So the model driven factors that lead to the labels of low agreement amongst models at the individual image level do not have consistency over a population of models by definition of the agreement metric. Based on the agreement metric, the prediction amongst models is very diverse and each model is making a prediction for reasons that are unique to that model amongst the 1032 models.\"}", "{\"summary\": \"This paper investigates correlations between populations of humans and object-recognition systems on object-classification disagreements. The results show that there is significant correlation between human and model population disagreements, as well as between human minimum viewing time and model disagreements. 
The results support the hypothesis that this correlation is driven by aspects of human visual perception that make certain aspects of images difficult to classify.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The experiments seem solid and the results are well-presented. The authors tested over 1,000 different models, including CNNs, ViTs, and hybrid models. The paper goes more deeply than just giving correlation statistics, and investigates what features low-agreement images have in common.\", \"weaknesses\": \"I'm not sure how useful these results are, either for understanding human or machine vision, or for improving machine vision systems. A useful result would point in a new direction for experiments (to better understand underlying mechanisms) and/or architectural improvements. But what are the next steps with these results? The authors did not address this or make the case that these results are important for the field.\", \"the_paper_states\": \"\\\"In this work, we challenge the assumption that disagreement among AI systems is intrinsic to these systems and unrelated to aspects of human visual processing\\\". But what are the citations for this assumption?\n\nI didn't understand, in the second paragraph, how this assumption \\\"aligns with standard approachs for comparing internal representations of AI and biological vision, such as representational similarity analysis\\\" or how it is \\\"explicit in behavioral extrapolation tests\\\" -- this needs better explanation.\", \"questions\": \"The paper states: \\\"AI systems might be more sensitive to background variations than humans and human population are more likely to disagree when pattern variations are present\\\". Explain what \\\"pattern\\\" refers to here.\n\nWhen giving models' accuracy on ImageNet and ObjectNet datasets, are you using top-5 or top-1 accuracy? 
What about for humans?\", \"figure_7\": \"What is \\\"Bin Mean Accuracy\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper attempts to establish similarity between artificial and biological vision by showing that populations of AI models and populations of humans show intra-group disagreement on the same stimuli. It motivates itself by claiming that prior work shows disagreement among models being a function of limitations in their development, rather than expressions of an underlying mechanism in both AI and human vision.\\n\\nThe paper defines agreement as Fleiss' $\\\\kappa$ for an image, calculated over a population of vision systems. It surveys ~40 humans and ~1000 models, trying CNNs, ViTs, and hybrids and varying model size, dataset size, and training methods (pretraining and finetuning). It also uses human minimum viewing time and difficulty score as comparison metrics.\", \"results_show\": \"- All metrics appear to correlate with model agreement in intuitive ways - not strong correlations, but significant and all in the intuitive direction\\n- The clearest relationship is for low-difficulty high-model agreement images \\nThe paper takes human-annotated visual attributes from the ImageNet-X dataset, in which humans annotated what aspects of an image make it difficult to classify. The paper showed that for both low-human agreement and low-model agreement images, the percent of images with each top difficulty factor shows similar relative influence - the percentage of images for each factor decreases in mostly the same order for both humans and models. The most influential factors are found to be background, pose, color, pattern, and \\\"smaller\\\". \\n\\nThe paper also shows that model agreement increases as accuracy increases. 
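Since the summary above pivots on the per-image agreement statistic, a minimal sketch of Fleiss' kappa over a population of classifiers may help. This is illustrative only, assuming a complete (items x categories) count matrix rather than reproducing the paper's actual implementation; the per-item observed agreement `P_i` is one plausible per-image agreement score.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of label counts.

    counts[i][j] = number of raters (here: models) assigning item i to category j.
    Every row must sum to the same number of raters n.
    """
    n = sum(counts[0])                      # raters per item
    num_items = len(counts)
    # per-item observed agreement P_i (a natural per-image agreement score)
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / num_items            # mean observed agreement
    total = num_items * n
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    P_e = sum(p * p for p in p_j)           # chance agreement from category prevalence
    return (P_bar - P_e) / (1 - P_e)

# Two items, two categories, five models: perfect within-item agreement gives kappa = 1.
kappa = fleiss_kappa([[5, 0], [0, 5]])
```

On this reading, "recalculating Fleiss' kappa for a model subset" (as asked later in the review) simply means rebuilding `counts` from that subset's predictions and recomputing.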
\\n\\nThe paper then positions itself against other error analysis-related works, works that use synthetic stimuli to assess differences, and metamers (this being an opposite of a metamer).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"### Quality\", \"Good problem setup: well-defined, statistical choices make sense, and experiences overall make sense (I will list a couple exceptions in the weaknesses)\", \"Good application of ImageNet-X to get systematic error analysis on naturalistic images\", \"Comparing to a population of models seems promising\", \"### Clarity\", \"Writing style is very clear. I rarely felt confused when reading the paper, and the structure made sense.\", \"Figures are well-designed. They are the most useful aspect for building intuition about the results - they look good, and show the right concepts.\", \"Explanation of Fleiss' $\\\\kappa$ helps build intuition for what \\\"agreement\\\" means, and also helps strengthen the experimental design choices\"], \"weaknesses\": [\"### Quality\", \"#### Problem\", \"Motivation isn't that convincing - the paper claims that the typical assumption around model errors is \\\"intrinsic to these systems and unrelated to aspects of human visual processing.\\\" But that isn't always the case - I think ambiguous images (which seem to be the crux of this paper) are not only known to be difficult for models just as they are difficult for humans, but are easily cited by most researchers as a cause of model error and likely disagreement\", \"The paper also claims evidence that \\\"disagreement among AI vision systems is driven by aspects of human visual perception, particularly image difficulty\\\" - it's worth nothing that classifications are a human concept, not an inherent property of the image, and training data reflects that. 
Maybe the paper isn't directly making this claim, but it seems that it's suggesting there are similar mechanisms between models (at least model populations) and humans that drive disagreement; I'd argue that these images are simply actually ambiguous, the classification is a product of human reaction to ambiguity, the training data is also a product of human reaction to ambiguity, and the model directly encodes that rather than showing an interesting emergent behavior.\", \"Data on variations of models is limited to a list in the appendix - would be good to be given a structured representation of the variations in a table\", \"#### Results\", \"Though the correlation coefficients are nontrivial and the figures line up with them, and I wouldn't expect strong correlations for such a high-dimensional problem, the figures do show a lot of spread.\", \"This also makes the results seem less surprising - from both this and figure 6, where we see the factors being \\\"background\\\",\\\"pose\\\", \\\"color\\\", \\\"pattern\\\", and \\\"smaller\\\", it seems that the difficult images are simply truly ambiguous. It's not a matter of ML fallibility, but I wouldn't expect it to be. It's also not an underlying surprising mechanism in human vision that makes humans fallible on them. The images are ambiguous and the humans who labeled them probably weren't completely sure what to label them. Even if we call it a shared mechanism/underlying principle of human vision, it's not surprising or unknown.\", \"It makes sense that agreement increases as overall accuracy increases, but this is really not surprising. It could be that there are cases where models all classify the image as the same wrong class, but just given how training works, it's likely the original image is misclassified (or the original assumption is true). 
In either case, this doesn't offer an alternative explanation to the original assumption.\", \"### Clarity\", \"Would help to have an explanation of why Fleiss' $\\\\kappa$ is a good measure of agreement, really just intuition on how it works.\", \"Sections 3.1 and 3.2 don't need to be there - they explain concepts that are immediately clear from the figures.\", \"More descriptive statistics on the figures would help understand how predictive the results are.\", \"### Originality and significance\", \"I haven't seen this framing of this problem. However, the concept itself - that ambiguous images are difficult for both humans and models - doesn't seem novel. It also doesn't seem to warrant this much formalization.\"], \"questions\": [\"I am curious how these experiments would fare for top-5 classification - possibly for humans, not just models\", \"In figure 6, how should we factor in the difference in proportions between models and humans, even if the order of proportions is mostly the same? I realize you're not making this claim, but if we want to establish similar underlying mechanisms, we'd need to deal with the differences in proportion for each factor. What might this imply for future studies?\", \"\\\"Images at low agreement levels are produce significantly lower Fleiss' $\\\\kappa$ than high agreement and all images, even for models at high performance levels\\\" - I thought that agreement is *defined* as Fleiss' $\\\\kappa$. Am I misinterpreting? Is the point that even when models are split and Fleiss' $\\\\kappa$ is recalculated, it is low for the images that had low Fleiss' $\\\\kappa$ across all models? 
That would be more meaningful, though continues to point to images that are simply ambiguous.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up discussion - score not changed\", \"comment\": \"Thank you for addressing my comments.\\n\\n>While the actual notion of disagreement among a population of models has not been measured before our submission, it has been an explicitly stated assumption that the mistakes that AI models make are distinct from the mistakes that humans make. For instance, Geirhos et al. (NeurIPS 2020) make the points [...]\\n\\nI would like to start the discussion from here because I believe the authors still mix up **errors/mistakes** models and humans make and their **learning strategy**. The need for context in Geirhos et al 2020, as reviewer pHQG also mentioned, might be missing as they focus on an individual model vs human comparison. Hence, as I said in my initial review, it might be a useful addition. But this paper's results, as presented at this point do not carry significant information. I would encourage the authors to keep experimenting and trying to disseminate the contribution of how and what population disagreement (alone) provides value.\\n\\n> We agree with this point. We are not saying that humans are making mistakes in the same kind of way, but humans and machines are potentially challenged in the same way by the same images.\\n\\nThis is exactly my point, how could you conclude your evaluation on a population level that models are challenged in the same way as humans? 
For example, take your Fig 6 in which even though the trend looks similar, there is an almost 10% difference in background (and for pattern) between the models and the humans.\\n\\n\\n>This was the original motivation for finding the 'disagreeable' stimuli: the variability of predictions amongst different models would help distinguish unique characteristics potentially related to architecture. But to our surprise, as described in the introduction, the prediction variability among the model group was similar to the variability of the human group.\\n\\nMy question here is how does that inform further studies? What should we do with these findings, which are neither convincing nor surprising? As I said in my example (Figure 12, in which the label corresponds to just one small part of the scene), it is very difficult to say why in this case models and humans made mistakes and whether it is for the same reason, or not.\\n\\nDiscussing and interpreting the properties of images that elicit the most disagreement within the model population would've been the most interesting part (also related to my point on *nuanced*). What should we do about those challenging factors? \\n\\nAnd in line with this, let's discuss your other point...\\n\\n>Keep in mind that the percentage of models making those labels is far smaller (6% of models) for low agreement images than high agreement images (100% of models). \\n\\nThis is not at all made clear in the paper.\\n\\n>So the model-driven factors that lead to the labels of low agreement amongst models at the individual image level do not have consistency over a population of models by definition of the agreement metric. 
Based on the agreement metric, the prediction amongst models is very diverse and each model is making a prediction for reasons that are unique to that model amongst the 1032 models.\\n\\nThis argument seems to contradict what you were saying earlier about the unique model (architecture) not impacting the prediction.\\n\\nTo conclude my remarks, I will keep my score but do encourage the authors to clarify their motivation in line with available literature and to further experiment and provide interpretation for the driving factors causing disagreement and how this fits in, if at all, with the similarity between human and model perception.\"}", "{\"comment\": \"Apologies for the confusion. Disagreement is defined per image in Equation 2. Based on the equation, it measures how \\\"spread out\\\" the predictions are between the population of models. The more spread out the predictions, the lower the agreement.\\n\\n> This confusion goes back to my request to motivate and convincingly explain the assumption that this paper starts from (L19-20). In your very first reply, you gave the following answer:\\n\\nThe answer was pointing out that the pair-wise error consistency metric from [Geirhos et al. (NeurIPS 2020)](https://arxiv.org/abs/2006.16736) was focused on the \\\"trials where the decision makers agree do not provide much evidence for distinguishing between processing strategies. In contrast, the (few) errors of the decision makers are the most informative trials in this respect.\\\"\\n\\nWe do not mean to suggest that mistakes are the same as measures of agreement. Agreement is more analogous to prediction consistency. Our goal in L19-20 is to say that disagreement or lack of prediction consistency amongst a model population need not deviate from the lack of prediction consistency of a human population. 
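To make the agreement statistic both sides of this thread are referring to concrete: Fleiss' kappa (κ) measures how spread out a population's per-image predictions are. A minimal pure-Python sketch follows; the per-image count matrices in the test below are illustrative toy data, not the paper's (whose exact Equation 2 is not reproduced in this thread):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a population of raters (here: models).

    counts: one row per item (image), one column per category (class);
    each entry is the number of raters choosing that category. Assumes
    every item was rated by the same number of raters (>= 2).
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # overall proportion of ratings falling into each category
    p = [sum(row[j] for row in counts) / (n_items * n_raters)
         for j in range(n_cats)]
    # per-item observed agreement among rater pairs
    per_item = [(sum(c * c for c in row) - n_raters)
                / (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(per_item) / n_items       # mean observed agreement
    p_e = sum(pj * pj for pj in p)        # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement (every model predicting the same class for each image) gives κ = 1, while κ near or below 0 means agreement is no better than chance — the regime the "low agreement" images fall into.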
We're open to suggestions on rewriting this to illustrate it more clearly, and thank you for the effort in this discussion.\"}", "{\"comment\": \"We value your thorough review and we're responding as promptly as possible because the points you have made are important and we would like to discuss them further.\\n\\n> **It should not be surprising that both humans and machines have difficulty in correctly identifying the target class in these cases. But it is not justified to use this as a basis to say that machines and humans are making mistakes in the same kind of way - it is much more nuanced than that.**\\n\\nWe agree with this point. We are not saying that humans are making mistakes in the same kind of way, but humans and machines are potentially challenged in the same way by the same images. We think this is not obvious at the population level because all the models were individually trained/fine-tuned on the same labels, so there's no ambiguity in their training, but ambiguity and disagreement nonetheless arise. And that ambiguity and disagreement appear to be aligned with populations of humans.\\n\\n> **My first concern is related to the assumption from which the paper starts (L19) about the \\u201cfactors driving disagreement among AI systems are also causing misalignment between AI systems and human perception\\u201d - why would that be the case?**\\n\\nWhile the actual notion of disagreement among a population of models has not been measured before our submission, it has been an explicitly stated assumption that the mistakes that AI models make are distinct from the mistakes that humans make. For instance, [Geirhos et al. 
(NeurIPS 2020)](https://arxiv.org/abs/2006.16736) make the points:\\n\\n\\\"The consistency between CNNs and human observers, however, is little above what can be expected by chance alone\\u2014indicating that humans and CNNs are likely implementing very different strategies.\\\"\\n\\n\\\"We conclude that there is a substantial algorithmic difference between human observers and the investigated sixteen CNNs: humans and CNNs are very likely implementing different strategies.\\\"\\n\\n\\u201cCohen\\u2019s $\\\\kappa$ for CNN-human consistency is very low for both models (`.068` for ResNet-50; `.066` for CORnetS) compared to `.331` for human-human consistency.\\u201d\\n\\nFurthermore, [Geirhos et al. (NeurIPS 2018)](https://arxiv.org/abs/1808.08750) make the point:\\n\\n\\u201cAdditionally, we find progressively diverging patterns of classification errors between humans and DNNs with weaker signals.\\u201d\", \"title\": \"Discussion of some key concerns\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate the clear and insightful review. We would like to address some comments:\\n\\n> Some findings are quite intuitive, for example, the correlation between AI (dis)agreement and human (dis)agreement. This probably is due to the labels are created by humans.\\n\\nWe agree that there is an intuitive aspect that the models are trained on labels created by humans. But we think the intuition is less obvious than at first appearance because all models were trained on the same singular label generated by humans. We see that despite all models being trained to provide the same labeling over the training set, they disagree on held-out images in a way that is similar to how a human population would disagree. Despite no labeling variation during training, the prediction variation acquired by the population of models (as measured through disagreement) is similar to a human population. 
We will update the draft to make this point more clear.\\n\\n> 42 participants from user study might be a bit bias. May conduct a few more user studies and combine with previous data.\\n\\nSorry for the confusion. There are actually 2,647 human participants from the user study we used. We've updated the draft to make this more clear. There are 42 predictions per image (7 different subjects seeing each image at one of six timings). To eliminate memory recall effects, each subject needs to be viewing the image for the first time.\\n\\n> The image style does not look very good, some images are taking too many spaces but contain relatively few contents.\\n\\nThank you for the suggestion. We have adjusted the figures to be more space efficient by making better use of the aspect ratio with more information presented in each row.\\n\\n> In Fig 1, it is a bit surprising that there are very few images with high human agreement from the top histogram, which means humans rarely have full agreement on images. Could you explain possible reasons behind this?\\n\\nObjectNet was collected to be a challenging dataset for models that perform well on ImageNet. Given the large range of viewing times (from 17ms to 10s), humans also have difficulty in predicting all of the images correctly across all viewing times. Interestingly, it is actually the shorter viewing times where humans tend to agree more than longer ones.\\n\\n> If humans and AI cannot recognize the difficult images or the edge-case images, it means vision alone cannot solve the problem and we probably do not have a better solution using only vision. What other benefits could it bring to us if we study more on the difficult images? In other words, how does studying the edge-case images help?\\n\\nWe view edge cases as something akin to optical illusions for visual intelligence. 
Much like the famous blue dress image (https://en.wikipedia.org/wiki/The_dress), edge cases that lead to disagreement could reveal compelling discrepancies or similarities between populations of visual intelligences.\"}", "{\"comment\": \"> For a study comparing human and artificial visual systems, the authors might want to consider the body of literature that draws from neuroscience to better understand how convolutional neural networks (CNNs) could model early visual processing pathways\\n\\nActually there's a growing body of work that shows that architecture does not play a significant role in modeling visual processing pathways.\\n\\n\\\"We find that model scale and architecture have essentially no effect on the alignment with human behavioral responses, whereas the training dataset and objective function both have a much larger impact.\\\" \\\\cite{muttenthaler2022human, conwell2024large}\\n\\nMore recent work has found that many different architectures make very similar predictions \\\\cite{conwell2024large} and that discrimination along the dimension of architecture is not viable \\\\cite{han2023system}. This was the original motivation for finding the 'disagreeable' stimuli: the variability of predictions amongst different models would help distinguish unique characteristics potentially related to architecture. 
But to our surprise, as described in the introduction, the prediction variability among the model group was similar to the variability of the human group.\\n\\n@article{muttenthaler2022human,\\n title={Human alignment of neural network representations},\\n author={Muttenthaler, Lukas and Dippel, Jonas and Linhardt, Lorenz and Vandermeulen, Robert A and Kornblith, Simon},\\n journal={arXiv preprint arXiv:2211.01201},\\n year={2022}\\n}\\n\\n@article{conwell2024large,\\n title={A large-scale examination of inductive biases shaping high-level visual representation in brains and machines},\\n author={Conwell, Colin and Prince, Jacob S and Kay, Kendrick N and Alvarez, George A and Konkle, Talia},\\n journal={Nature Communications},\\n volume={15},\\n number={1},\\n pages={9383},\\n year={2024},\\n publisher={Nature Publishing Group UK London}\\n}\\n\\n@inproceedings{han2023system,\\n title={System identification of neural systems: If we got it right, would we know?},\\n author={Han, Yena and Poggio, Tomaso A and Cheung, Brian},\\n booktitle={International Conference on Machine Learning},\\n pages={12430--12444},\\n year={2023},\\n organization={PMLR}\\n}\"}
4cQVUNpPkt
FOLEYCRAFTER: BRING SILENT VIDEOS TO LIFE WITH LIFELIKE AND SYNCHRONIZED SOUNDS
[ "Yiming Zhang", "Yicheng Gu", "Yanhong Zeng", "Zhening Xing", "Yuancheng Wang", "Zhizheng Wu", "Kai Chen" ]
We study Neural Foley, the automatic generation of high-quality sound effects synchronizing with videos, enabling an immersive audio-visual experience. Despite its wide range of applications, existing approaches encounter limitations when it comes to simultaneously synthesizing high-quality and video-aligned (i.e., semantically relevant and temporally synchronized) sounds. To overcome these limitations, we propose FoleyCrafter, a novel framework that leverages a pretrained text-to-audio model to ensure high-quality audio generation. FoleyCrafter comprises two key components: a semantic adapter for semantic alignment and a temporal adapter for precise audio-video synchronization. The semantic adapter utilizes parallel cross-attention layers to condition audio generation on video features, producing realistic sound effects that are semantically relevant to the visual content. Meanwhile, the temporal adapter estimates time-varying signals from the videos and subsequently synchronizes audio generation with those estimates, leading to enhanced temporal alignment between audio and video. One notable advantage of FoleyCrafter is its compatibility with text prompts, enabling the use of text descriptions to achieve controllable and diverse video-to-audio generation according to user intents. We conduct extensive quantitative and qualitative experiments on standard benchmarks to verify the effectiveness of FoleyCrafter. Models and codes will be available.
[ "Diffusion Model", "Audio Generation", "Video to Audio Generation" ]
Reject
https://openreview.net/pdf?id=4cQVUNpPkt
https://openreview.net/forum?id=4cQVUNpPkt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yPmDhezQaB", "tbSzHbmV4B", "qVxsivDist", "neg8BhyVrD", "b0OQv0UP6i", "RTERJ9Neo4", "ROfBaYlZfr", "NLqnyDSAhW", "M9xKvs9qZc", "JR5VnoJWwV", "IvbQ5BQoUi", "HHQWLZWnve", "DNFGJRFG92", "DGq6of7TqT", "BJexSORFGH", "7ScQtLtJli", "5rwTmPlX8j" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737523747494, 1732531954041, 1730275563474, 1732416694061, 1732343111094, 1732087485154, 1732531846445, 1732087494924, 1732087477972, 1730690268402, 1734748191191, 1732503189863, 1732783390701, 1732305713424, 1732678114374, 1730305559296, 1732087474771 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Reviewer_p9QH" ], [ "ICLR.cc/2025/Conference/Submission6154/Reviewer_p9QH" ], [ "ICLR.cc/2025/Conference/Submission6154/Reviewer_Bd3G" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Reviewer_Bd3G" ], [ "ICLR.cc/2025/Conference/Submission6154/Area_Chair_MAV8" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ], [ "ICLR.cc/2025/Conference/Submission6154/Reviewer_Qwiq" ], [ "ICLR.cc/2025/Conference/Submission6154/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer p9QH (Part 2/2)\", \"comment\": 
\"**[About Potential Failure Cases]**\\n\\n- We have indeed discussed potential failure cases in the limitations section (L716-L722) of the appendix. Specifically, when the visual scene becomes highly complex or the video is exceptionally long, synchronization accuracy can be constrained by the performance of the audio signal estimation and the quality of the training data. \\n\\n- Based on your suggestions, we have also visualized and included some failure cases in the revised version. Please refer to the uploaded revision for our visualization analysis. To address these challenges in temporal alignment, we plan to focus on constructing highly visual-audio-aligned datasets and advancing model design in future work.\\n\\n\\n**[About Details of Subjective Evaluations]**\\n\\nThank you for pointing out this issue. In addition to the details provided in Section B.3 of the appendix, we will incorporate the following clarifications in the final version of the paper, as per your suggestions.\\n\\nWe conducted a user study involving **20 participants** who rated 40 randomly selected video-audio samples, following V2A-Mapper. Specifically, the participants were either practitioners in audio generation and multimedia or PhD students specializing in Artificial Intelligence. To ensure unbiased feedback, all results were presented to the participants anonymously.\\n\\nEvaluating audio quality, temporal alignment, and semantic alignment across samples from different models requires significant focus from participants. To reduce cognitive load and avoid random ratings, we opted for **pairwise comparisons** instead of Mean Opinion Scores or Meaningful Difference Scores, as used in V2A-Mapper. As shown in Figure 3 of the appendix, pairwise comparisons simplify the evaluation process by asking participants to compare two results at a time, which improves the reliability of their judgments and ensures higher utilization of user study votes. 
Such a pairwise comparison design is also widely used in the fields of large language models (LLMs) [1,2] and computer vision [3,4]. Specifically, in each trial, participants were presented with two results: one generated by FoleyCrafter and the other by a randomly selected baseline. \\n\\nTo further ensure the quality and consistency of the evaluations, we designed the study to present the same questions multiple times throughout the process. This repetition helped us verify participants\\u2019 attentiveness and identify any inconsistencies in their responses. Inconsistent scores for the same question from the same participant were treated as unreliable, allowing us to maintain the integrity of the results.\\n\\n[1] Liu, Yinhong, et al. \\\"Aligning with human judgement: The role of pairwise preference in large language model evaluators.\\\" arXiv preprint arXiv:2403.16950 (2024).\\n\\n[2] Liusie, Adian, et al. \\\"Efficient LLM Comparative Assessment: a Product of Experts Framework for Pairwise Comparisons.\\\"\\u00a0arXiv preprint arXiv:2405.05894\\u00a0(2024). \\n\\n[3] Li, Shufan, et al. \\\"Aligning diffusion models by optimizing human utility.\\\" arXiv preprint arXiv:2404.04465 (2024).\\n\\n[4] Zeng, Yanhong, et al. 
\\\"Aggregated contextual transformations for high-resolution image inpainting.\\\" IEEE Transactions on Visualization and Computer Graphics 29.7 (2022): 3266-3280.\"}", "{\"summary\": \"In this paper, the authors proposed a framework called FoleyCrafter to synthesize high-qulity audio with text prompt, which contains two key components as follows:\\n1.Semantic adapter condition generated audio conditioned on video features, rendering more semantically relevance.\\n2.Temporal adapter estimates time signals, synchronizing with audio.\\nThe authors carried experiments on two datasets and achieved better performance compared with current powerful models.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1.Originality: The authors proposed two adapters to improve the audio synthesis. However, the structure inside originates from other works.\\n2.Quality: Although the method proposed is effective compared to others, it lacks rigorous mathematical proof.\\n3.Clarity: Semantic adapter has not been clarified clearly, especially the cross-attention component.\\n4.Significance: The significance of the method is relatively high comparing to existing methods. However, parameters to be trained is relatively high compared to others.\", \"weaknesses\": \"1.Lack of Innovation: In this article, there are two key components. However, the semantic adapter is derived from the IP-adapter[1], while the temporal adapter originates from ControlNet[2]. This article lacks substantial original contributions.\\n2.Inference Latency Concerns: In the articles mentioned above, the authors only add a single adapter to the original model. However, in this article, the proposed method includes two separate adapters, which may result in higher inference latency, potentially impeding efficiency and scalability.\\n3.Insufficient Analysis of Text Prompts: In this article, there are text prompts and video prompts for audio generation. 
However, the authors provide only a qualitative description of the text prompt's capabilities, without comparing it to other models.\\n\\n[1] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023.\\n[2] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836\\u20133847, 2023a.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the author\\u2019s responses to my concerns. However, I would like to share some remaining concerns that have not yet been fully resolved.\\n\\n**[About Novelty]** \\nThe contributions of this work are primarily based on the effective application of existing architectures. While the authors have clarified that the Semantic Adapter and Temporal Adapter represent new applications of existing methods, the work lacks further theoretical analysis or novel architectural design that aligns with the quality of ICLR in the context of video-to-audio generation.\\n\\n**[About Inference latency and trainable parameters]** \\nCould you clarify why Diff-Foley achieves lower inference latency despite having twice the number of parameters? Is it due to the proposed two separate adapters? Additionally, some experimental details are missing. For instance, do FoleyCrafter and baselines employ acceleration techniques for attention, such as flash-attention? What is the exact inference time for each submodule of FoleyCrafter? Addressing these points could help make the results more compelling.\\n\\n**[About Analysis of text prompts]** \\nThank you for providing the experimental results. 
The response from the authors has addressed my questions.\\n\\n\\n**[About Potential Failure Cases]** \\nAlthough Figure 6 shows the effectiveness of the proposed method, the paper does not adequately discuss potential failure cases in video to audio generation. It would be beneficial for the authors to address scenarios where the model might underperform, such as wrong temporal information predicted by the temporal estimator that leads to failures, difficulties with extremely long video inputs, or challenges with some special video content. Understanding these limitations would provide a clearer understanding of the proposed model.\\n\\n**[About Details of Subjective Evaluations]** \\nThe details of the subjective evaluations are missing, i.e., what the demographics are like for the raters in subjective evaluations, whether there are attention checkers, how the results are quality-checked etc. Besides, there is no information about the compensation, or criteria for hiring human subjects for the subjective evaluation.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I sincerely appreciate the time and effort the authors spent addressing the reviewers' comments. I feel that all of my concerns have been addressed very effectively. I raised my rating to 8. I recommend the paper.\"}", "{\"comment\": \"We sincerely appreciate your review and address the major concerns below.\\n\\n**W1. Additional Evaluation in terms of FD and FAD.**\\n\\nWe appreciate the reviewer's suggestion regarding the evaluation metrics. Following this feedback, we conducted comprehensive evaluations using both Fr\\u00e9chet Distance (FD) and Fr\\u00e9chet Audio Distance (FAD) as below. Our results demonstrate that FoleyCrafter consistently outperforms existing methods, achieving state-of-the-art performance across both metrics. 
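For readers unfamiliar with these metrics: FD and FAD are both Fréchet distances between Gaussians fitted to embedding sets of real and generated audio (FAD is conventionally computed on VGGish embeddings). The general closed form is ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)); a minimal sketch of the one-dimensional special case follows — real evaluations use multivariate embeddings, so the scalar "embeddings" here are purely illustrative:

```python
from math import sqrt
from statistics import fmean, pvariance


def frechet_distance_1d(real_embs, gen_embs):
    """1-D Fréchet distance between Gaussians fitted to two scalar samples.

    In one dimension the general formula reduces to
    (mu1 - mu2)^2 + (sigma1 - sigma2)^2, since the trace term
    sigma1^2 + sigma2^2 - 2*sigma1*sigma2 is a perfect square.
    Lower values mean the two fitted distributions are closer.
    """
    mu1, mu2 = fmean(real_embs), fmean(gen_embs)
    sigma1 = sqrt(pvariance(real_embs, mu1))
    sigma2 = sqrt(pvariance(gen_embs, mu2))
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2
```

An FD of 0 would mean the fitted Gaussians coincide, which is why lower FD/FAD values indicate generated audio whose embedding statistics are closer to those of real audio.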
Specifically, FoleyCrafter achieves an FD of 20.32 and FAD of 2.69, representing a 19.41% improvement over the strongest baseline.\\n\\n| Method | VGGSound | | AVSync | |\\n|--------------------------|-------------------------------|---------------|--------------------------------|---------------|\\n| | FD\\u2193 | FAD\\u2193 | FD\\u2193 | FAD\\u2193 |\\n| SpecVQGAN | 32.08 | 5.563 | 32.08 | 11.51 |\\n| Diff-Foley | 29.37 | 6.199 | 29.37 | 12.75 |\\n| Seeing-and-Hearing | 33.11 | 4.325 | 33.11 | 11.67 |\\n| SonicVisionLM | 21.06 | 3.338 | 21.06 | 10.24 |\\n| Ours (Timestamp-based) | 20.33 | **2.326** | 20.33 | 8.045 |\\n| Ours (Energy-based) | **20.32** | 2.690 | **20.32** | **7.902** |\\n\\n\\n\\n**Improvement of writing style.**\\n\\nWe appreciate the reviewer's detailed suggestions on improving the presentation. We have standardized all table, figure, and citation references, and expanded Section 2 with more comprehensive related works. All changes can be found in our uploaded revision.\"}", "{\"title\": \"Response to Reviewer p9QH (Part 1/2)\", \"comment\": \"Dear reviewer:\\n\\nThanks for your comments. Here are our replies for your remaining concerns.\\n\\n**[About Novelty]**\\n\\nThank you for acknowledging our new applications of the Semantic Adapter and Temporal Adapter for video-to-audio generation. We argue that the potential of **developing plug-and-play modules based on off-the-shelf T2A (text-to-audio) models for video-to-audio generation is highly underrated**. High-quality text-audio paired datasets are significantly larger and more reliable compared to video-audio paired datasets, which are often low-quality and inconsistent. 
We believe that training plug-and-play modules for video-to-audio adaptation based on text-to-audio models is a promising direction to achieve high-quality results while maintaining training efficiency, especially compared to training video-to-audio models from scratch (e.g., SpecVQGAN and Diff-Foley).\\n\\nTo advance the development of such a plug-and-play mechanism:\\n\\n1. **We take the first step** by designing a Semantic Adapter **to encode video features for direct attention** by audio generation models. This approach avoids relying on text as an intermediate bridge, as seen in prior works (e.g., Seeing and Hearing, SonicVisionLM, and V2A-Mapper), which often leads to suboptimal semantic alignment.\\n\\n2. **We conducted the first extensive study on both timestamp-based and energy-map-based audio signal estimation** to address the spatial misalignment problem between visual and audio modalities when applying ControlNet.\\n\\nWe sincerely hope that these insights and explorations, supported by our extensive experiments and strong results, will inspire future research and drive progress in the video-to-audio community.\\n\\n\\n**[About Inference latency and trainable parameters]**\\n\\nThanks for pointing out this issue. We will add the following clarifications to the final version. \\n\\n- **Inference latency of Diff-Foley.** While Diff-Foley has twice the number of **trainable parameters** (859M for the entire model trained from scratch), it achieves lower latency during inference since its **total model size remains 859M**. While FoleyCrafter has **fewer trainable parameters (415M for adapters)**, it has about 1.2B (additional 890M for the frozen UNet) model parameters during inference, leading to slightly higher inference latency. It is important to note that inference latency can also be influenced by various design factors (e.g., network architecture, downsampling factors, feature extraction, etc.). 
Nonetheless, by comparing the overall inference latency and trainable parameters, we demonstrate that FoleyCrafter achieves competitive inference latency while offering the added advantage of high training efficiency, thanks to its plug-and-play framework.\\n\\n- **Comparison settings**. We have checked and ensured that all the models (including FoleyCrafter) were tested using their native implementations, without employing acceleration techniques such as flash attention, to ensure a fair comparison.\\n\\n- **Exact inference time for each submodule**. The exact inference time for each submodule of FoleyCrafter is as follows: **Semantic Adapter** (**0.58** seconds for visual encoding), **Temporal Adapter** (**0.12** seconds for audio signal estimation), **UNet inference** (**1.59** seconds for complete diffusion sampling), and **IO** (**0.84** seconds for reading video frames), resulting in **a total inference time of 3.1 seconds**. We recognize that FoleyCrafter has potential for further optimization in terms of inference latency, which we plan to address in future work.\"}", "{\"comment\": \"We thank all reviewers for their time and invaluable feedback.\\n\\nWe appreciate the recognition of FoleyCrafter as a **well-designed model** (Bd3G) with **unique and original thinking** (Qwiq) and the acknowledgement of the **comprehensiveness of our experimental results** (Bd3G, Qwiq), which show it is an **effective method** for video-to-audio generation (p9QH).\\n\\nWe address the questions and concerns for each reviewer in the rebuttal sessions individually. As mentioned in our responses, we will incorporate clarifications in the final version of our paper. 
Please let us know if there are any questions or further clarifications/discussions.\"}", "{\"comment\": \"We sincerely appreciate your positive feedback regarding our 'unique and original thinking,' 'high research quality with extensive experiments,' 'well-structured' presentation, and 'innovations with substantial application potential.' We address the remaining concerns below.\\n\\n**W1. Novelty in leveraging T2A models for V2A generation.**\\n\\nFoleyCrafter is the first to employ plug-and-play modules to achieve high-quality, video-aligned audio generation **directly from visual content**.\\nIt investigates the off-the-shelf high-quality pre-trained audio generator for video-to-audio (V2A) generation. The pre-trained model exhibits a robust text-to-audio generative capability that we find advantageous for V2A tasks. By utilizing this powerful audio generator, FoleyCrafter can produce more realistic and higher-quality audio compared to existing video-to-audio models. \\n\\nHowever, generating video-aligned audio with such a well-trained audio generator remains a challenge. To address this, we propose the semantic adapter and temporal adapter to enable visually relevant and synchronized audio generation. In summary, we investigated how to leverage the existing T2A model and introduced new modules to adapt it for V2A tasks.\\n\\n**W2. About the generation speed.**\\n\\nFoleyCrafter has a fast generation speed. \\nWe evaluate the inference time of existing works using the same video frame rate and on the same computing device, and report the results below. 
\\n\\n| Method | Inference Time | Trainable Parameters |\\n|--------------------|----------------|----------------------|\\n| SpecVQGAN (ResNet) | 4.8s | 379M |\\n| Diff-Foley | 2.7s | 859M |\\n| Seeing-and-Hearing | 22.02s | - |\\n| SonicVisionLM | 3.7s | 364M |\\n| Ours | 3.1s | 415M |\\n\\nFor implementation, both the semantic adapter and temporal adapter can be parallelized with the UNet during the sampling process, resulting in fast inference times.\\n\\n**W3. Ablation of Semantic Adapter and Temporal Adapter.**\\n\\nWe indeed include ablation experiments for the semantic and temporal adapters in Table 3 and Table 4 of the main paper. \\n\\nWe report audio quality and audio-visual relevance with and without the semantic adapter in Table 3 L483-L484. The improvement in MKL and CLIP scores indicates that the semantic adapter effectively integrates detailed visual embeddings for audio generation. Additionally, the lower FID demonstrates that the semantic adapter enhances the utilization of the well-trained audio generator for high-quality video-to-audio generation.\\n\\nFor the temporal adapter, we assess the temporal alignment of audio samples generated with and without it in Table 4 L457-L460. The significant improvement (10.6%) in Onset AP, AV-Align, and Energy MAE shows that the temporal adapter improves the synchronization of FoleyCrafter.\\n\\n**W4. Ablation of \\u03bb in Cross-Attention.**\\n\\nAs described in L240-L269, the semantic adapter utilizes the parallel cross-attention with a variable parameter \\u03bb. Here we show the ablation results of different \\u03bb. 
The results indicate that as \\u03bb decreases, the MKL, CLIP score, and FID all degrade, demonstrating that the semantic adapter is crucial for audio-visual alignment and high-quality generation.\\n\\n| \\u03bb | MKL\\u2193 | CLIP\\u2191 | FID\\u2193 |\\n|------------------------|------------|------------|------------|\\n| \\u03bb = 0 | 6.212 | 3.542 | 103.1 |\\n| \\u03bb = 0.4 | 2.376 | 8.428 | 61.56 |\\n| \\u03bb = 0.8 | 1.857 | 9.856 | 49.71 |\\n| \\u03bb = 1.0 (FoleyCrafter) | **1.719** | **11.37** | **42.40** |\"}", "{\"summary\": \"This paper presents a new video-to-audio model, featuring a semantic adapter and a temporal adapter. The proposed model uses the [Auffusion](https://arxiv.org/abs/2401.01044) model as a baseline, and not only video-audio paired data but also text-audio paired data are used for training its sub-modules for connecting between the visual encoder and Auffusion. The temporal adapter, trained with the BCE loss or MSE loss to estimate the energy map of audio from video, enhances the synchronization between video and audio. The authors conducted both quantitative and qualitative comparisons with previous video-to-audio models to demonstrate that the proposed model outperforms them. They also conducted ablation studies to show that their proposed semantic and temporal adapters are effective.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Although there is room for improvement in writing style, the paper itself is well-written enough to make readers understand their motivation, the proposed method, and the experimental results.\\n2. The proposed video-to-audio model is well-designed to address the issue of synchronization between video and audio. There may be other designs for resolving the issue, but they conducted ablation studies to demonstrate that their designed model works well.\\n3. 
The authors quantitatively evaluated their model on the commonly used benchmarks and qualitatively analyzed the audio signals generated from the proposed and previous models for comparison. These experimental results show that the proposed model outperforms the previous models.\", \"weaknesses\": [\"L.365: \\\"We employed several evaluation metrics to assess semantic alignment and audio quality, namely Mean KL Divergence (MKL) (Iashin and Rahtu, 2021), CLIP similarity, and Frechet Distance (FID) (Heusel et al., 2017), following the methodology of previous studies (Luo et al., 2023; Wang et al., 2024; Xing et al., 2024). MKL measures paired sample-level similarity\\\"\", \"The application of FID to audio quality evaluations is proposed by [Iashin and Rahtu (2021)](https://www.bmvc2021-virtualconference.com/conference/papers/paper_1213.html), and [Luo et al. (2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/98c50f47a37f63477c01558600dd225a-Abstract-Conference.html) followed them. However, [Wang et al. (2024)](https://ojs.aaai.org/index.php/AAAI/article/view/29475) and [Xing et al. (2024)](https://openaccess.thecvf.com/content/CVPR2024/html/Xing_Seeing_and_Hearing_Open-domain_Visual-Audio_Generation_with_Diffusion_Latent_Aligners_CVPR_2024_paper.html) use different metrics, FD ([Liu et al., 2023](https://proceedings.mlr.press/v202/liu23f.html)) and FAD ([Kilgour et al., 2019](https://www.isca-archive.org/interspeech_2019/kilgour19_interspeech.html)). I recommend the authors additionally evaluate their proposed model with these metrics for several reasons. The FID and FAD are calculated from spectrograms and do not consider phase information of audio signals. The FD is based on the PANN network ([Kong et al., 2020](https://ieeexplore.ieee.org/document/9229505)), which takes audio waveforms and achieves better performance in classification tasks than VGGish. Plus, recent papers use FAD or FD more frequently. 
The evaluation on these metrics will be informative to readers, which means the authors can contribute more to the community.\"], \"questions\": \"I would appreciate the authors' response to my comments in \\\"Weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents a framework: FoleyCrafter for adding foley sound effects to videos. The key innovations are: i) to enhance an audio latent diffusion model that is conditioned on text to also include video semantics via incorporating video features using cross-attention layers, and ii) temporal synchronization of the audio with the time-varying video content, using either manually-provided time-stamps or using an energy map estimated from mel spectrograms, followed by using ControlNet for synchronization. Experiments on VGGSound and AVSync15 show promising results.\", \"additional_comments_on_reviewer_discussion\": \"The paper is well-written and easy to follow, however received mixed reviews. While the reviewers appreciated the empirical benefits showcased and the substantial potential of the model towards enabling varied applications in video-to-audio generation, there were many concerns brought up, which were debated with the authors during the author-reviewer discussion phase. Mainly, there were three important concerns raised by the reviewers:\\n1. Lack of originality in the contributions (Qwiq, p9QH)\\n2. Many ablations studies, performance comparisons, and evaluation metrics being missed (Qwiq, p9QH, Bd3G)\\n3. Latency in the audio-generation against prior methods. \\n\\nDuring the discussion phase, authors provided additional numerical results addressing points 2 and 3. 
Specifically, authors pointed out the ablation studies that were already in the paper, comparisons w/ and w/o text prompts were provided, inference latency comparisons were provided that showed the latency to be in the same ballpark as prior methods, and comparisons on two additional metrics FD and FAD (requested by Bd3G) were added.\\n\\nHowever, the concern regarding novelty remains. As noted by Reviewer p9QH, the contributions of this paper appear incremental to IP-adapter [1] and ControlNet [2]. While the authors pointed out during the discussion that the paper proposes to use different modalities (video features) as part of their semantic alignment module (as against text features in [1]), AC agrees with the reviewer that there is a lack of any significant novelty made in this incorporation. Further, while the paper uses timestamp information and the energy map of mel spectrograms for estimating the time-varying audio signals, these appear to be heuristic choices. As such, AC agrees with Reviewers p9QH and Qwiq that, despite the strong empirical performance, the paper unfortunately falls short of significant scientific contributions or insights in comparison to prior art, and thus recommends reject.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe have provided comprehensive responses to each concern. Please let us know if you have any additional questions that we can address during the discussion period. We hope that you can consider raising the score after we address all the issues. \\n\\nIf you still have questions and concerns, please feel free to comment here. We will reply as soon as possible.\\n\\nThank you!\"}", "{\"title\": \"We are looking forward to your feedback.\", \"comment\": \"Dear Reviewer p9QH,\\n\\nAs the discussion deadline approaches, we would like to know whether we have addressed your remaining concerns. 
\\n\\nWe highly value your feedback and have made additional clarifications and necessary revisions in the manuscript, highlighted in $\\\\color{red}{red}$. Your time and feedback are greatly appreciated, and we look forward to your response.\"}", "{\"title\": \"Please let us know whether we address all the issues\", \"comment\": \"Dear reviewer,\\n\\nWe have submitted the response to your comments. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.\\n\\nIf you still have questions and concerns, please feel free to comment here. We will reply as soon as possible.\\n\\nThank you!\"}", "{\"title\": \"Further Concerns or Questions?\", \"comment\": \"We have provided additional details and clarifications in response to your remaining concerns, and we would like to know if there are any further questions or issues we can address. We are committed to engaging fully and will reply as promptly as possible.\\n\\nThank you once again for your insightful and constructive feedback, which we believe has been instrumental in strengthening our work. We sincerely hope the improvements we have made address your key concerns, highlight the contributions of our paper, and contribute positively to the development of this community. We kindly ask you to reconsider your evaluation in light of these changes, and we remain happy to further address any additional concerns you may have.\"}", "{\"summary\": \"This paper introduces FoleyCrafter, a framework designed for automatically generating realistic and synchronized sound effects for silent videos. FoleyCrafter leverages a pre-trained text-to-audio model, incorporating a \\u201csemantic adapter\\u201d and \\u201ctemporal adapter\\u201d to ensure that the generated audio is semantically aligned with video content and precisely synchronized over time. 
Additionally, it supports customizable audio generation through text prompts. The primary contributions include: 1) presenting a novel neural Foley framework for high-quality, video-aligned sound generation, 2) designing semantic and temporal adapters to improve audio-video alignment, and 3) achieving state-of-the-art performance on benchmarks through comprehensive quantitative and qualitative evaluations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tOriginality: This paper introduces an innovative framework, FoleyCrafter, which stands out in the field of sound generation for silent videos. By combining a pre-trained text-to-audio model with novel adapter designs (semantic and temporal adapters), it effectively addresses the limitations of existing methods in terms of audio quality and video synchronization, showcasing unique and original thinking.\\n2.\\tQuality: The paper demonstrates high research quality through comprehensive experimental design and implementation. It includes extensive quantitative and qualitative experiments, validating the effectiveness of FoleyCrafter on standard benchmark datasets. The results show that this method surpasses several state-of-the-art approaches in both audio quality and synchronization performance. Additionally, the availability of code and models facilitates future replication and research.\\n3.\\tClarity: The paper is well-structured, with clear explanations of concepts and model design, allowing readers to easily understand how FoleyCrafter operates. The figures and results in the experimental section are also well-presented, enabling readers to intuitively grasp the method\\u2019s performance and advantages.\\n4.\\tSignificance: FoleyCrafter holds substantial application potential in the field of video-to-audio generation. 
This approach not only enhances the realism and synchronization of sound effects but also offers controllability and diversity through text-based prompts. Such innovations have broad applicability in multimedia production, including film and gaming, and further advance cross-modal generation technology in the audio-visual domain.\", \"weaknesses\": \"The paper\\u2019s originality appears limited. The whole model system builds on many existing models, such as the Freesound Project and Auffusion.\\nAlthough the Quantitative Comparison section includes evaluations in terms of semantic alignment, audio quality, and temporal synchronization, a comparison of audio generation speed has not been presented.\\nThe lack of some ablation experiments for the Semantic Adapter and Temporal Controller weakens persuasiveness. The Semantic Adapter could be entirely removed to observe the system\\u2019s performance without visual semantic information. The Onset Detector and Timestamp-Based Adapter could be individually removed to investigate their roles in temporal alignment and onset detection. In addition, it would be more persuasive if ablation experiments for Parallel Cross-Attention with different \\u03bb had been done.\", \"questions\": \"Refer to Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your review and address the major concerns below.\\n\\n**W1. Limited innovations** \\n\\nWe respectfully disagree with the assessment of limited innovation. FoleyCrafter introduces several technical breakthroughs: \\n\\n1. 
Novel End-to-End Approach:\\n - Previous methods either use video-to-text-to-audio pipelines (Xing et al., 2024; Wang et al., 2024; Xie et al., 2024b), compromising video alignment, or train from scratch on noisy datasets (Iashin and Rahtu, 2021; Luo et al., 2023), sacrificing audio quality.\\n - FoleyCrafter is the first to generate high-quality audio directly from video while maintaining strong temporal alignment.\\n2. We attribute the breakthroughs of FoleyCrafter to the following model designs:\\n- Semantic Adapter: While sharing architectural similarities with IP-Adapter, we pioneer its application for cross-modality conditioning from video to audio, which differs from the image-to-image setting in IP-Adapter. As discussed in L263-269, we develop a specialized video encoder to capture visual cues for audio generation. Our novel random dropping strategy enables both effective visual guidance and text controllability, providing a new paradigm for video-guided audio synthesis.\\n- Temporal Adapter: Unlike traditional ControlNet, which performs spatially aligned feature residual addition from image to image, we address the unique challenge of video-audio temporal alignment through two novel approaches:\\n - A timestamp event mask-based method\\n - Energy map-based control that eliminates the need for video labels, enabling training on larger-scale datasets.\\n\\nThese adapters make FoleyCrafter the first to employ plug-and-play modules to achieve high-quality, video-aligned audio generation **directly from visual content**. \\n\\n3. These innovations deliver substantial improvements over the strongest baselines:\\n- 14.33% improvement in audio quality (FID scores).\\n- 10.6% better temporal alignment performance across multiple metrics.\\n- Unique flexibility supporting both video-only and video-text inputs.\\nOur work bridges the gap between high-quality audio synthesis and precise video alignment, establishing a new paradigm for neural foley sound generation.\\n\\n**W2. 
Inference latency and trainable parameters.**\\n\\nFoleyCrafter demonstrates competitive inference time. We evaluate the 10-second audio generation time for different methods using the same video frame rate on the same device, and present the results below. For implementation, both the semantic adapter and temporal adapter can be parallelized with the UNet during the sampling process, resulting in fast inference times.\\n\\n| Method | Inference Time | Trainable Parameters |\\n|--------------------|----------------|----------------------|\\n| SpecVQGAN (ResNet) | 4.8s | 379M |\\n| Diff-Foley | 2.7s | 859M |\\n| Seeing-and-Hearing | 22.02s | - |\\n| SonicVisionLM | 3.7s | 364M |\\n| Ours | 3.1s | 415M |\\n\\nThe total trainable parameters include the two adapters (415M) and the temporal estimator (31M), which is much smaller than Diff-Foley (859M) and comparable to SpecVQGAN and SonicVisionLM (approximately 400M).\\n\\n**W3. Analysis of text prompts.**\\n\\nWe would like to clarify that FoleyCrafter is primarily designed for video-to-audio generation, without requiring text input. While most existing video-to-audio models are limited to visual inputs, our semantic adapter uniquely enables optional text-based control. We demonstrate this additional capability qualitatively in Figure 6 of the main paper.\\n\\nIn response to the reviewer's concern, we conducted an additional comparison of text-based video-to-audio generation results with those of other methods. As described in the main paper, here we also use wav2clip (Wu et al., 2022) to calculate the embedding similarity between audio embeddings, text embeddings, and visual embeddings. We conduct the evaluation on AVSync15 (Zhang et al., 2024) and use \\\"The sound of [label]\\\" as the prompt. \\n\\nAs shown in the table below, when both text prompts and visual information are provided, FoleyCrafter achieves the best performance on the text CLIP score, indicating the **best alignment with the prompt**. 
These results demonstrate FoleyCrafter's **flexible ability to condition generation on both text and video** inputs according to users' intents.\\n\\n| Method | CLIP-Visual | CLIP-Text |\\n|-------------------------------|-------------|-----------|\\n| SpecVQGAN (ResNet) | 6.610 | 17.92 |\\n| Diff-Foley | 10.38 | 17.32 |\\n| Seeing-and-Hearing | 2.098 | 17.45 |\\n| SonicVisionLM | 9.236 | 17.21 |\\n| FoleyCrafter (V2A) | **11.67** | 17.98 |\\n| FoleyCrafter (text-based V2A) | 11.21 | **18.07** |\"}" ] }
4bOCP1GtX4
WenXinGPT: A Multimodal Conversational Model for Enhancing Orthopedic Expert Consultations
[ "Yubo Huang", "Xin Lai", "Zixi Wang", "Jingzehua Xu", "Shuai Zhang" ]
Inspired by the hospital expert consultation model, this paper proposes a conversational medical visual language model for orthopedics, named WenXinGPT (Multi-disciplinary Collaboration). The core concept of this work focuses on aligning medical visual and textual representations to leverage high-quality data for generating expert consultation dialogues across hospital departments. The primary objective is to uncover orthopedic knowledge within medical intelligence models and enhance their reasoning abilities in an interpretable manner without requiring additional training. Our research particularly emphasizes zero-shot scenarios, and the results from experiments on 16 datasets provided by Peking Union Medical College Hospital demonstrate that the proposed WenXinGPT framework excels at mining and utilizing medical expertise within large language models, while also expanding their reasoning capabilities. Based on these findings, we conducted manual evaluations to identify and categorize common errors in our methods, along with ablation studies aimed at understanding the impact of various factors on overall performance.
[ "Multimodal conversational model", "orthopedic expert consultations", "medical visual language model", "zero-shot scenarios", "large language models" ]
https://openreview.net/pdf?id=4bOCP1GtX4
https://openreview.net/forum?id=4bOCP1GtX4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ZHUn2cD1Fu", "YwEyBY1Gjx", "GUEUvSutaH", "8mZPQ4pTZ4" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731629031582, 1730935038102, 1730663644869, 1729276551655 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5190/Authors" ], [ "ICLR.cc/2025/Conference/Submission5190/Reviewer_47Ub" ], [ "ICLR.cc/2025/Conference/Submission5190/Reviewer_4LpN" ], [ "ICLR.cc/2025/Conference/Submission5190/Reviewer_QMrA" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"The suggestions made by the reviewers have helped us make our article better. We are willing to listen to the reviewers' opinions and make long revisions, which may not be in time for the next round of reviews. Thank you for the reviewers' fair evaluation of our article.\"}", "{\"summary\": \"This paper introduces WenXinGPT, a 7B parameter multimodal language model designed for orthopedic medical consultations in Chinese healthcare settings. The authors present a three-stage training process involving pretraining, domain-specific fine-tuning, and incorporation of multi-disciplinary expert consultations. The model is evaluated on medical data and compared against GPT-3.5 and XrayGPT using ROUGE.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors create a comprehensive dataset covering 16 distinct categories of orthopedic surgery-related data from a medical institution. 
The dataset includes diverse medical information that could be valuable for future research.\", \"weaknesses\": \"The paper claims to be multimodal, stating \\\"WenXinGPT, a multimodal large model specifically designed for Chinese medical image diagnosis\\\" (Introduction), yet provides no technical details about visual processing or multimodal integration architecture.\\n\\nThe evaluation is fundamentally flawed, comparing a supposedly multimodal model primarily against GPT-3.5 (a text-only model) using only text-based ROUGE metrics. Despite citing MiniGPT-4, LLaVA, and mPLUG-Owl in the introduction, these more relevant multimodal baselines are absent from the evaluation.\\n\\nThe paper claims architectural innovations through NAS and GQA but provides no evidence these choices improve upon existing architectures like Llama that already use GQA. Testing is limited to a single institutional dataset, raising questions about generalizability.\", \"questions\": \"1. Can you provide results on other medical datasets to demonstrate generalizability?\\n\\n2. Why were modern multimodal models not included as baselines? The current comparison against GPT-3.5 seems inappropriate for evaluating multimodal capabilities.\\n\\n3. The paper mentions using \\\"16 A100 GPUs (32GB)\\\" for training, but A100s only come in 40GB and 80GB variants. Could you clarify which GPU models were used?\\n\\n4. What specific advantages does your architecture provide over existing models like Llama 3 that already use GQA?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces WenXinGPT, a multimodal LLM for orthopedic medical diagnoses in Chinese. This paper introduces a new dataset for orthopedic surgery and uses a Multi-Department Consultation framework to develop a comprehensive surgical plan.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
This paper addresses a significant gap in non-English healthcare by introducing a multimodal orthopedic domain language model in Chinese.\\n2. Introduced a novel MC approach that includes feedback from various experts in formalizing the final surgical plan.\\n3. Incorporates multi-round discussion amongst medical professionals from different domains, thus aligning it closely with real-world medical consultations.\\n4. Introduced a new dataset containing detailed categories of orthopedic surgery essential for future research in this domain.\", \"weaknesses\": \"1. Dataset details: Details on dataset size (number of tokens), high-level statistical analysis, and dataset composition are lacking, including the specific datasets used and the proportions allocated for pretraining and fine-tuning (SFT).\\n2. Evaluation Metrics: Evaluation relies solely on ROUGE scores, which is insufficient to capture essential aspects of medical report quality, such as interpretability and usability. Comparisons are limited to two other LLMs; additional comparisons to advanced models like Opus or GPT-4 would better contextualize the results. Results from GPT-based assessments also need to be included. The work would significantly benefit from human evaluations.\\n3. Ablation Studies: The study lacks an analysis of how NAS and MC strategies impact model performance, making the effectiveness of these approaches unclear.\\n4. Implementation Details: Key implementation details, such as the prompts used for the MC framework and the tasks for supervised fine-tuning (SFT), are missing, which impacts reproducibility.\\n5. Multi-Turn Dialogue: The multi-turn interaction mechanism needs to be clearly explained, and the example provided does not sufficiently illustrate how multi-turn discussions are initiated or maintained. \\n6. 
Domain Focus and Generalizability: The choice to focus exclusively on orthopedics is not entirely justified, and there is limited discussion on the model\\u2019s adaptability to other medical specialties or non-Chinese datasets.\\n7. Ethical Considerations: Information on handling Protected Health Information (PHI) in the dataset is incomplete, with no clear explanation of the PHI removal or validation techniques.\", \"questions\": \"Comments: This paper represents valuable groundwork for healthcare applications in non-English languages, and I believe it addresses an important and necessary area. However, it currently lacks significant details that require attention.\", \"suggestions\": \"Please address the points outlined in the Weakness section to enhance the paper's contribution. Specifically, expanding on evaluation metrics and experiments, ablation work to showcase the impact of MC, as well as expanding on the multi-turn approach, would greatly strengthen the technical contribution of this paper.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"The authors state that all patient data has been desensitized to protect confidentiality and privacy; however, they do not provide further details or evidence to substantiate this claim. Hence, an ethical review is needed.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces WenXinGPT, which incorporates multiple LLM agents on X-ray images for better clinical diagnosis. The idea of using multi-agents is interesting. However, there are major flaws that I will outline in more detail.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The use of multi-agents. This design makes the generation of diagnosis more like a 'joint expert consultation' process, improving the outputs' robustness and interoperability.\\n2. The generation of the dataset. 
The dataset mentioned in the paper, if publicly available, would be a good platform for future research.\", \"weaknesses\": \"1. The paper is not well written.\\n 1) For example, the authors mentioned in the contributions, 'implement an underlying prompt system'. However, this part is missing in the paper.\\n 2) The dataset is not clearly introduced. How many records are in this dataset? How many participants are in this dataset? \\n 3) How are the D_consultations (mentioned in the Training Pipeline) acquired? And how is the human feedback acquired for RLHF?\\n\\n2. Some of the contents are misleading. For example, the authors mentioned that they use 'a 7-billion-parameter decoder-only LM', which turns out to be DeciLM-7B developed by others. Did the authors make modifications? Why not cite DeciLM-7B the first time it appears? Did the authors develop GQA and NAS, or just use the same implementation as Deci? This needs to be clarified. \\n\\n3. The experimental design is not clear. For quantitative evaluation (testing), which portion of the data was used? What is the performance in terms of BLEU scores? How are CoT, SC, and few-shot, zero-shot strategies implemented? Why compare only with GPT-3.5 and XrayGPT, instead of other general LLMs and medical LLMs? With Few-shot CoT + SC, the performance is better than WenXinGPT itself. How can the performance of WenXinGPT be further improved? How is the 'consensus among the interdisciplinary team' reached in the example case?\", \"questions\": \"I have strong concerns regarding the three contributions that the authors mentioned:\\n1. The authors mentioned the first contribution is to fill the gap of non-English-speaking healthcare LLMs. If so, why don't we just translate existing English-based LLMs to the target language? Will that lead to decreased performance? \\n2. Does 'multi-round interactive dialogue system' refer to the multi-agents? 
This should be more like a 'joint expert consultation' process rather than a 'multi-round interactive dialogue'. How is the 'consensus among the interdisciplinary team' reached? \\n3. How is the 'underlying prompt system' involved in this research? This part is missing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4b1cJHn7q5
Enforcing 3D Topological Constraints in Composite Objects via Implicit Functions
[ "Hieu Le", "Jingyi Xu", "Nicolas Talabot", "Jiancheng Yang", "Pascal Fua" ]
Medical applications often require accurate 3D representations of complex organs with multiple parts, such as the heart and spine. Their individual parts must adhere to specific topological constraints to ensure proper functionality. Yet, there are very few mechanisms in the deep learning literature to achieve this goal. This paper introduces a novel approach to enforce topological constraints in 3D object reconstruction using deep implicit signed distance functions. Our method focuses on heart and spine reconstruction but is generalizable to other applications. We propose a sampling-based technique that effectively checks and enforces topological constraints between 3D shapes by evaluating signed distances at randomly sampled points throughout the volume. We demonstrate it by refining 3D segmentations obtained from the nn-UNet architecture.
[ "Topology; 3D Reconstruction; Implicit functions; Composite Objects" ]
https://openreview.net/pdf?id=4b1cJHn7q5
https://openreview.net/forum?id=4b1cJHn7q5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rGPvDuBz2g", "YnNDPHmVzS", "WbJUiyCoLT", "VbBDUN6ZG3", "AEIXGf1owX" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732711141636, 1730719457618, 1729292376647, 1730693029613, 1730109178991 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2594/Authors" ], [ "ICLR.cc/2025/Conference/Submission2594/Reviewer_hb8a" ], [ "ICLR.cc/2025/Conference/Submission2594/Reviewer_9fUj" ], [ "ICLR.cc/2025/Conference/Submission2594/Reviewer_6aXe" ], [ "ICLR.cc/2025/Conference/Submission2594/Reviewer_wEaN" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their time and feedback.\"}", "{\"summary\": \"The authors propose a method for 3D organ reconstruction with regard to pre-defined topological constraints. The core of the proposed approach is a global Monte Carlo sampling that evaluates the signed distances for two organs to estimate their relationship. In contrast, previous works only consider local constraints, e.g. non-intersection of different parts, but cannot evaluate the global contact ratio of two sub-organs. The authors evaluate their method on both multi-organ cardiac and spine datasets, but emphasize applicability to other organs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"S1: The work is interesting and approaches a worthwhile topic in the subdomain of medical image analysis. The authors aptly note that existing multi-organ segmentation methods do not consider topological constraints between different sets of organs. While the surface can be reconstructed to avoid local artefacts, constraints between organs with specific priors cannot be easily specified. 
Based on this, they propose several loss functions based on surface-aware Monte Carlo sampling that enforce correct behaviour (contact, non-contact, non-intersection) between shape pairs.\", \"S2: The presentation is compelling and professional; there are no major typos, and the figures are nicely constructed.\", \"S3: The method works well with respect to the baseline nn-Unet, and is particularly impressive with respect to out-of-distribution data. The authors also show that using deep SDFs for this task generates better overlap estimates than converting the outputs to meshes.\"], \"weaknesses\": \"- W1. From my understanding, the two major contributions are the loss functions for regularizing the multi-organ reconstruction approach, constructed through surface-aware Monte Carlo sampling, and the use of deep SDFs as a representation for this task. Numerous loss functions for regularizing with respect to surface contact have been explored over the years [1,2,3]. Some others were designed for 2D but obey the same principles of (lack of) intersection and contact as explored here.\\nMy main concern is that the method evaluation is limited to nn-Unet and fitting the SDFs to each organ individually. There are other losses that have been used to regularize topological consistency of medical organs; the authors should compare to these, as is, I don\u2019t think the experimental aspects of this paper do justice to the previous literature on this topic.\\nWhile the authors mention the closely related method by Gupta et al [1], they discard it in the introduction as it only handles local constraints, and cannot be used to enforce global organ contact priors. However, specifically with respect to my later point (see W2), such an approach may bias segmentations less. \\n- W2. Medical relevance is not explored despite being the primary motivation for this work. 
Enforcing a certain pre-specified level of contact between organ pairs is certainly useful for healthy patients, but in pathological cases one might specifically seek to find violations or deviations from such an overlap. The authors should at the very least mention how these losses might bias predictions towards reconstructions that mimic healthy organs. The paper would be much stronger and more application-relevant if this were explored.\\n- W3. In the introduction the authors state (L98-99) that the latent vector of the 3D SDF is used to refine the segmentation outputs of the nnUnet. However, despite showing in experiments that this approach is superior, they never actually detail how this is achieved.\\n\\n[1] Gupta, S., Hu, X., Kaan, J., Jin, M., Mpoy, M., Chung, K., ... & Chen, C. (2022, October). Learning topological interactions for multi-class medical image segmentation. In *ECCV.*\\n[2] Ganaye, P. A., Sdika, M., Triggs, B., & Benoit-Cattin, H. (2019). Removing segmentation inconsistencies with semi-supervised non-adjacency constraint. *Medical image analysis*, *58*, 101551.\\n[3] Reddy, Charan, Karthik Gopinath, and Herve Lombaert. \\\"Brain tumor segmentation using topological loss in convolutional networks.\\\" (2019).\", \"questions\": [\"Please expand and compare the used losses with previous losses used for topologically aware segmentation in the medical imaging (and potentially other) literature. Detail why and how the specific losses proposed here are unique, and particularly effective for the task at hand. These claims should be backed up experimentally.\", \"The authors should comment on the potential bias such a prior induces upon outputs in pathological cases. Ideally, this would also be backed up experimentally. 
Could this prior be determined on a patient-specific basis, or based on other factors besides a specific pre-defined overlap?\", \"Please clarify the refinement of the nn-Unet segmentations using the latent SDF representation, as this part is not detailed clearly in the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"## Motivation\", \"Deep implicit functions have emerged as a powerful solution for representing 3D shapes.\", \"However, most of the focus has been put on single-object scenarios, ignoring topological constraints (contact enforcement, non-interpenetration, etc.) that may arise in multi-object applications, such as anatomical modeling.\", \"## Contributions\", \"The authors extend existing neural SDF solutions [Park et al., 2019] to enforce non-interpenetration between different object categories (i.e., different anatomical entities), as well as to enforce user-defined surface contact ratio or surface distance.\", \"This is achieved through the introduction of attraction-repulsion losses applied to a subset of 3D points meeting the contact constraints.\", \"## Results\", \"The authors demonstrate their solution on two clinical use-cases: 3D whole-heart reconstruction (enforcing user-defined surface contact ratio between heart components) and lumbar spine reconstruction (enforcing user-defined minimal distance between vertebrae).\", \"They compare to the original segmentation results (nn-Unet [Isensee et al., 2018]) as well as baseline DeepSDF [Park et al., 2019a], showing that their method succeeds in enforcing non-interpenetration and the user-defined constraints.\", \"An ablation study, as well as well-presented qualitative results, are also provided.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"_(somewhat ordered from most to least important)_\\n\\n## S1. 
Clear Illustrations\\n- The authors provide well-designed illustrations to convey their intuition/contributions (Fig. 2), as well as to share their qualitative results (e.g., by highlighting contact vs. interpenetration regions in Fig. 3).\\n\\n## S2. Motivation & Relevance\\n- The implicit modeling of multi-component scenes is an under-explored topic. Most of the research in that direction focuses on human/object interaction scenarios, but the resulting solutions do not always transfer well to anatomical use-cases (e.g., due to rigidity assumptions).\\n- Moreover, the authors' idea to condition the contact/distance losses based on medical prior is interesting and well-motivated.\\n\\n## S3. Decent Reproducibility\\n- Even though the authors did not release their code, an expert in the art should be able to re-implement their work, i.e., extending the publicly-available DeepSDF implementation with the proposed multi-object losses.\", \"weaknesses\": \"_(somewhat ordered from most to least important)_\\n\\n## W1. Lack of Relevant SOTA Comparison\\n\\n### W1.a. No Mention of Existing Multi-Organ DIF Works\\n\\n[L154-157] The authors claim that:\\n\\n> We focus on two different kinds of constraints\\u2014**neither of which has been\\nconsidered in previous work**\\u2014in two distinct scenarios. First, when reconstructing the four chambers\\nof the human heart, these chambers **should never intersect but instead should be in contact with\\neach other** over a given percentage of their surface areas. [...]\\n\\nHowever, their novelty claim is heavily questionable. Even when focusing only on the narrow domain of implicit anatomical modeling, at least two papers [a, b] have already proposed contact and/or non-interpenetration losses. Similar losses have been proposed for other applications, e.g., human/object interaction modeling [c]. The fact that the authors neither compare to\\u2014nor even discuss\\u2014such prior art is problematic.\\n\\n### W1.b. 
Comparison to Baseline Only\\nSimilarly, the authors only compare their method to a single other deep implicit function method, DeepSDF [Park et al., 2019]. This work is quite outdated and focuses on single-object scenarios. It is obvious that it would underperform the proposed solution w.r.t. contact/interpenetration metrics. It would have been meaningful to compare the proposed method to (a) more recent implicit solutions targeting multi-object scenarios [a,b,c]; or at least to (b) DeepSDF applied to modeling the entire scene (as one single multi-part object) rather than to multiple DeepSDF instances applied to each component.\\n\\n## W2. Superficial Contributions Compared to SOTA\\n\\n### W2.a. Attraction-Repulsion Losses Already Applied to Anatomical Modeling\\n\\nWith the above-mentioned prior art in mind, the contributions claimed in this paper appear rather shallow. Their only claims are the losses ensuring non-interpenetration, as well as enforcing surface contact or surface distance (depending on the scenario). While the idea to condition the contact/distance losses on user-defined values is novel, similar contact/repulsion functions already exist in the literature [a,b,c]. Due to the lack of comparison, it is also unclear how their formulation of the contact/inter-penetration losses fares compared to existing solutions.\\n\\n### W2.b Redundant Definition (?)\\n\\nThe self-intersection loss $\\\\mathcal{L}\\\\_{\\\\text{intersecting}}$ and contact-ratio loss $\\\\mathcal{L}\\\\_{\\\\text{contact}}$ proposed in this paper appear somewhat redundant, as well as highly similar to the loss $\\\\mathcal{L}^\\\\mathcal{C}$ proposed in [b], where it is defined as an \\\"attraction-repulsion\\\" function to ensure both non-interpenetration and contact of surfaces.\\n\\nSimilar to the current submission, the loss in [b] relies on the sampling of contact points (set $\\\\mathcal{C}$ in [b]), generalized to any number of surfaces (not just 2). 
The only contribution of the present paper is the weighting of the set size by the target user-provided contact ratio (a minor change, in my opinion).\\n\\nIndeed, if we define:\\n\\n$\\\\mathcal{A}\\\\_{\\\\text{contact}} = \\\\mathcal{A}\\\\_{\\\\text{intersecting}} \\\\cup \\\\mathcal{A}\\\\_{\\\\text{outside}} \\\\cup \\\\mathcal{A}\\\\_{\\\\text{single}}$, \\n\\nwith $\\\\mathcal{A}\\\\_{\\\\text{outside}}$ set of close points outside all objects and $\\\\mathcal{A}\\\\_{\\\\text{single}}$ set of points inside a single object, then:\\n\\n$\\\\mathcal{L}\\\\_{\\\\text{contact}} = \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{contact}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)| $\\n$ = \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{intersecting}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)| + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{outside}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)| + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{single}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)|$\\n$ = \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{intersecting}}} \\\\sum\\\\_{i \\\\in [a, b]} |f(i, x)| + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{outside}}} \\\\sum\\\\_{i \\\\in [a, b]} |f(i, x)| + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{single}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)|$\\n$ = \\\\mathcal{L}\\\\_{\\\\text{intersecting}} + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{outside}}} \\\\sum\\\\_{i \\\\in [a, b]} |f(i, x)| + \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{single}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)|$,\\n\\nc.f. 
$| x + y | = | x | + | y |$ if $\\\\text{sign}(x) = \\\\text{sign}(y)$\\n\\nHence, $\\\\mathcal{L}\\\\_{\\\\text{intersecting}}$ is redundant with $\\\\mathcal{L}\\\\_{\\\\text{contact}}$.\\n\\nMoreover, based on the above equation, we can also observe that:\\n\\n$\\\\mathcal{L}\\\\_{\\\\text{contact}} \\\\approx \\\\mathcal{L}^\\\\mathcal{C} + \\\\Delta\\\\mathcal{L}$,\\n\\nwith the main difference (if we ignore the sigmoid-based normalization added to the loss $\\\\mathcal{L}^\\\\mathcal{C}$ in [b]) being:\\n\\n$\\\\Delta\\\\mathcal{L} = \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{single}}} |\\\\sum\\\\_{i \\\\in [a, b]} f(i, x)| - \\\\sum\\\\_{x \\\\in \\\\mathcal{A}\\\\_{\\\\text{single}}} \\\\sum\\\\_{i \\\\in [a, b]} |f(i, x)|$. \\n\\nI.e., for points close to two objects but inside only one, the authors of [b] compute the sum of absolute SDF values, whereas the present authors compute the absolute sum of SDF values. I do not have the insight to know which is best (a comparison could be interesting), but I believe that the difference in terms of overall supervision is minor (since it concerns only a small subset of points, and since other losses such as $\\\\mathcal{L}\\\\_{\\\\text{data}}$ would have a more significant influence on those).\\n\\n## W3. Medical Grounding & Clinical Applicability\\n- A key claim in this work is the enforcement of topological priors from the medical literature. However, the medical grounding is somewhat lacking. E.g., it is unclear where the authors got the 27\\\\% value used as surface contact ratio for left ventricle and left myocardium. Only one reference is provided w.r.t. heart anatomy [Buckberg et al., 2018], but the above number does not seem to actually appear in that referenced article (?). \\n- One can also wonder what would be the actual clinical use for a method that forces the reconstruction to meet statistical constraints based on healthy populations. 
E.g., what happens for a patient with a heart or spine condition? The authors do warn that \\\"_in this paper, we restrict ourselves to healthy subjects for whom this constraint must be satisfied._\\\" [L177-178] But they do not provide any insight on the clinical impact of this limitation.\\n\\n## W4. Minor - Methodology Not Always Clear\\n- The contributions w.r.t. enforcing the contact ratio and w.r.t. enforcing the minimum distance appear severely disconnected (both in terms of methodology and in terms of actual application). The formalism of the corresponding losses could be better homogenized, e.g., by highlighting how the two losses constrain the range of valid distances (the contact loss enforces a maximum distance; the distance loss enforces a minimum one).\\n- The redundant definition of the point sets ($\\\\mathcal{A}\\\\_{\\\\text{contact}}, \\\\mathcal{A}\\\\_{\\\\text{non-contact}}, \\\\mathcal{A}\\\\_{\\\\text{intersecting}}$) is a bit confusing. I.e., is it useful to list these sets in [L221-223] if they are formally defined afterwards, [L255-258]?\\n- The font style of the loss functions is not always consistent (e.g., $\\\\mathcal{L}\\\\_{\\\\text{contact}}$ vs. $\\\\mathcal{L}\\\\_{contact}$).\\n\\n#### **Additional References:**\\n\\n[a] Zhang, Congyi, et al. \\\"An Implicit Parametric Morphable Dental Model.\\\" ACM Transactions on Graphics (TOG) 41.6 (2022): 1-13.\\n\\n[b] Liu, Yuchun, et al. \\\"Implicit Modeling of Non-rigid Objects with Cross-Category Signals.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 4. 2024.\\n\\n[c] Hassan, Mohamed, et al. \\\"Synthesizing physical character-scene interactions.\\\" ACM SIGGRAPH 2023 Conference Proceedings. 
2023.\", \"questions\": \"_see **Weaknesses** for key questions/remarks._\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work focuses on resolving the shape contact issue using optimization as a post-processing step. Two constraints are proposed to regularize the shape representation: the contact ratio and the minimum distance between two shapes. By keeping the desired contact ratio and the distance between shapes, the reconstructed 3D shape would be more precise, with fewer penetration artifacts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work targets exploring shape constraints for reconstructing organs from human scans.\\n2. In this paper, the authors propose two shape constraints: one is the contact ratio and the other is the minimum distance. Several straightforward losses are introduced to keep the desired contact ratio and distance by optimization. \\n3. The writing is clear and easy to follow.\", \"weaknesses\": \"1. Utilizing the segmentation from existing models and DeepSDF to fit the segmentation, the proposed method is specifically designed for shape post-processing with knowledge from the previous steps.\\n2. The P_contact and P_non-contact are from the overfitted DeepSDF representation. However, if the DeepSDF representation is not correct or the segmentation is not accurate enough, the P_contact point set is not correct. The optimization therefore cannot correct the initial prediction, and the result will contain artifacts. Please discuss how the method handles cases where the initial DeepSDF representation or segmentation is inaccurate. An analysis of the method's robustness to errors in the initial inputs would be beneficial. \\n3. 
In the introduced loss function, the optimization is only applied to the 3D shape representation; however, the optimized 3D shape might not be consistent with the image after the optimization. Combined with the last point: if the initial representation or segmentation is inaccurate, how could the optimization correct the errors? Please include a discussion on potential methods to maintain this consistency or evaluate it quantitatively.\", \"questions\": \"Please refer to the weakness part.\\nAdditionally, the abdomen dataset should be a perfect fit for this work, as the abdomen region contains multiple organs and they are close to each other.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a concept to incorporate topological constraints for cardiac shape representation in the context of deep signed distance functions. The method is composed of several parts: sampling of topologically meaningful points, optimization of the sum of four loss functions, and the enforcement of minimum distance constraints. In several numerical experiments, the performance of the method is both qualitatively and quantitatively examined. In particular, an ablation study reveals that all four loss functions are essential to achieve the reported performance.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The presentation of the paper is excellent, and all methods are clearly described. 
To the best of my knowledge, I have not seen the combination of the four loss functions in this way (although I have encountered most (probably all) as separate loss functions elsewhere).\\nMoreover, the research question itself (imposing topological constraints for DeepSDF) is highly significant.\\nFinally, the numerical experiments are systematically conducted and (partially) underline the claims of the paper.\", \"weaknesses\": \"The reconstruction method itself builds upon rather old publications by Park and Isensee, thereby completely ignoring the regularized DeepSDF approaches with their substantially improved reconstruction quality (e.g., \\\"Reconstruction and completion of high-resolution 3D cardiac shapes using anisotropic CMRI segmentations and continuous implicit neural representations.\\\" by Sander et al., \\\"Sdf4chd: Generative modeling of cardiac anatomies with congenital heart defects.\\\" by Kong et al., or \\u201cShape of my heart: Cardiac models through learned signed distance functions\\u201d by Verh\\u00fclsdonk et al.). In particular, these regularized versions are proven to preserve topological constraints better. A systematic benchmark with some of these recent approaches is required instead of only considering \\\"old\\\" approaches.\\nMoreover, the design of the four loss functions is entirely heuristic; any motivation or mathematical reasoning for this particular choice is completely lacking (only the ablation study partially underlines this specific choice).\\nFinally, I suspect that the sampling requirements (300k points after 10 iterations) result in inferior run time (and maybe performance) compared to the above-mentioned regularized approaches.\", \"questions\": \"1. Have you integrated this method into regularized versions of the deep signed distance function (see the Weaknesses section)? Can you please report on the results? Here, the integration of a Lipschitz regularization would be one possible option.\\n2. 
Can you provide a justification for the inclusion of these four particular loss functions beyond the ablation study? Are there particular theoretical frameworks or principles that should be applied to justify the loss function design?\\n3. What is the additional runtime caused by the sampling?\\n4. I am lacking details on the optimization in Section 3.2.3. Can you please provide them?\\n5. In Figure 8, I can hardly recognize the distribution of the topologically meaningful points near the interfaces. Can you please present this in a better way?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4anfpHj0wf
Unlocking Point Processes through Point Set Diffusion
[ "David Lüdke", "Enric Rabasseda Raventós", "Marcel Kollovieh", "Stephan Günnemann" ]
Point processes model the distribution of random point sets in mathematical spaces, such as spatial and temporal domains, with applications in fields like seismology, neuroscience, and economics. Existing statistical and machine learning models for point processes are predominantly constrained by their reliance on the characteristic intensity function, introducing an inherent trade-off between efficiency and flexibility. In this paper, we introduce Point Set Diffusion, a diffusion-based latent variable model that can represent arbitrary point processes on general metric spaces without relying on the intensity function. By directly learning to stochastically interpolate between noise and data point sets, our approach effectively captures the distribution of point processes and enables efficient, parallel sampling and flexible generation for complex conditional tasks. Experiments on synthetic and real-world datasets demonstrate that Point Set Diffusion achieves state-of-the-art performance in unconditional and conditional generation of spatial and spatiotemporal point processes while providing up to orders of magnitude faster sampling.
[ "Generative Model", "Diffusion Model", "Set Model", "Point Sets", "Forecasting", "Density Estimation", "Spatial", "Temporal", "Probabilistic Models" ]
Accept (Poster)
https://openreview.net/pdf?id=4anfpHj0wf
https://openreview.net/forum?id=4anfpHj0wf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mUg2AqCMUL", "emPNXZDG6I", "cv33luTTfq", "YXYGRwpGll", "WzjT2rmhGk", "TlVnAABhuY", "RaSs0wtw7e", "KdTssvmBWq", "KGSSb8bzgb", "K4YaN3kn0G", "HLqXVnjxZa", "F4WTCj4yqb", "9aodC7BNFM", "5zysn5Nr3F", "1K4ItEZHNl" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730615942877, 1732689779540, 1732328969131, 1732301683867, 1730524531160, 1734812406147, 1732303050491, 1733156378478, 1732400013167, 1737524139732, 1730696413427, 1732301850163, 1732303203694, 1732302810022, 1730445834450 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_r6U8" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_XRB4" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_mmd5" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_mmd5" ], [ "ICLR.cc/2025/Conference/Submission11691/Area_Chair_myKo" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_r6U8" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_XRB4" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Authors" ], [ "ICLR.cc/2025/Conference/Submission11691/Reviewer_8FLx" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a Point Set Diffusion model for conditional and unconditional generation of point processes (spatial, temporal, and spatio-temporal) without intensity functions. 
The model treats the latent space of the point process as a whole and applies diffusion to learn how to generate point processes from noise (unconditioned) and conditioning masks (conditioned). At the training phase, the point process is passed through a forward process that gradually thins the original points and adds points from a noise point process. Then, a parameterized model is trained for the backward process that gradually predicts the points in the last timestep conditioned on the current timestep and thins the noise points in the current point process. After training, both conditioned and unconditioned sampling procedures are provided. Numerical experiments illustrate that the proposed Point Set Diffusion model achieves much faster sampling speed than intensity-based autoregressive models. Moreover, it outperforms several baseline autoregressive models on various SPP and STPP tasks, especially density estimation tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is overall well-written and easy to follow. The basic concepts are introduced clearly with consistent notations. The forward process, backward process, and final sampling algorithms are well explained. Illustrations (Figure 1-3) are very clear for readers to follow the workflow of the proposed Point Set Diffusion model. The datasets and metrics are also clear in the experiment section.\\n\\n2. The idea of leveraging diffusion models to generate the whole point process is intriguing, and it is quite different from the common approaches that use autoregressive models with parameterized intensity functions that suffer from sampling speed and are restricted to forecasting tasks. Numerical results are very promising to support the efficacy of the proposed model.\", \"weaknesses\": \"1. Currently there are very few baseline algorithms, e.g., for SPP conditional generation there is only one baseline, and for STPP forecasting there are only two. 
It would be more convincing to compare with more baseline models, or to provide more evidence that the current baselines are already SOTA (which I believe they are).\", \"questions\": \"1. The sampling time and quality of the diffusion model are directly related to the number of forward/backward steps, which I could not find in the paper. Could the authors provide some ablation study on the number of steps, e.g., how the sampling time and quality grow with it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the response from the authors. In general, some of my concerns have been addressed, but others have not, and I am not fully convinced in a few parts of the author's response.\\n\\nMy confusion about \\\"the model focuses on the first-order property of the point process\\\" has been addressed, and I can agree with the logic behind the modeling of the **Janossy density**. I understand the necessity of a conditional sampling algorithm, and the newly added example of Appendix A.10 did demonstrate that future events' distributions are influenced by prior events. I give credit to the authors for helping address my above concerns.\\n\\nHowever, what I am concerned with in my previous review is the **effectiveness** or accuracy of the conditional sampling instead of the necessity. An easy way to prove this is via simulation: given an observed history generated from a true STPP model $m^*$, the authors can first use $m^*$ to generate enough samples of sequences in the future time frame and calculate the density of points as \\\"distribution of future events\\\". Then, the authors can use their model to generate future sequences and compare the distribution of those events with the true distribution of future events obtained from the $m^*$. 
A synthetic experiment like this can clearly prove whether the conditional sampling really captures the ground truth.\\n\\n#######\\n\\nThe following are my comments on the discussion of the log-likelihood metric. With respect to the authors, I am afraid I cannot agree with their opinions on the log-likelihood metric:\\n1. First, the computation of the likelihood of a point process does not require the parametrization of the conditional intensity function. The intensity-based computation listed by the authors is only one of the approaches to compute the likelihood when the intensity function is available. \\n2. In fact, the derivation of the point process likelihood is closely connected with the **Janossy density** (see the derivation in section 2.4 in [1], or in Section 5.3/Definition 7.1.II in [2]). The likelihood is computed by a series of **density functions** (which is also the very original definition of any likelihood); see equation 6 in [1].\\n - A few examples of calculating the likelihood without parametrizing the conditional intensity function can be found in [3][4]. These point process studies also use diffusion models to sample events (although I admit that they are using diffusion in an autoregressive way, and this paper contributes to it by extending the modeling beyond autoregression by sampling a few points in parallel), and the likelihood can be computed by sampling candidate points and calculating the density of the observed ground truth.\\n\\n3. If the authors are claiming that their model can learn the **Janossy density** of the point processes, I would also expect the model to have a good likelihood of the data. I can even think of a way for the authors to calculate the likelihood of a sequence: given the observed history, generate multiple sequences, keep the first event in each sequence, and calculate the density. This is the density of the next event based on the history. 
The likelihood of an entire sequence is computed by iterating over all the events.\\n - I believe this evaluation procedure would cost linear complexity as others since the Point Set Diffusion is generating one sequence at a time.\\n\\nIn summary, I hold my opinion about the golden standard of likelihood in point processes and cannot agree with the authors' opinion that it is the *significant restriction* for point processes. Again, it does not require the parameterization of the conditional intensity.\\n\\n#######\\n\\nStill, I understand that the computation of likelihood will not be a main flaw of the proposed Point Set Diffusion. I can now see the value of the proposed method to the fields of point processes. Meanwhile, I still hope the authors can consider adding the synthetic data experiments and the possible evaluation of the likelihood. This would improve the quality of the paper and make it look more convincing.\\n\\nBased on all the assessments above, I decided to raise my score. \\n\\nDo the authors plan to release the code/implementation?\\n\\n--------\\n[1] Reinhart, Alex. \\\"A review of self-exciting spatio-temporal point processes and their applications.\\\" Statistical Science 33.3 (2018): 299-318.\\n\\n[2] Daryl J Daley, David Vere-Jones, et al. An introduction to the theory of point processes: volume I: elementary theory and methods. Springer, 2003\\n\\n[3] Dong, Zheng, Zekai Fan, and Shixiang Zhu. \\\"Conditional Generative Modeling for High-dimensional Marked Temporal Point Processes.\\\" arXiv preprint arXiv:2305.12569 (2023).\\n\\n[4] Yuan, Yuan, et al. \\\"Spatio-temporal diffusion point processes.\\\" Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 
2023.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you, the response has completely addressed my questions, and I have improved my score to 8.\"}", "{\"comment\": \"# Response\\n\\nWe thank the reviewer for their appreciation of our work and their feedback.\\n\\nAs the raised points seem rooted in one misunderstanding of our model, we clarify it before addressing the specific comments.\\n\\n## Our model captures the joint density of point sets\\nWhen modeling Point Processes, we are interested in capturing the complex interactions between all points, which can be expressed as the **conditional** Janossy intensity (see Eq. 2). \\nHowever, as explained in the subsequent paragraph, parameterizing and sampling this intensity is not feasible in general for non-ordered spaces.\\nFor ordered point processes (TPP, STPP), most models rely on the history-dependent intensity to parameterize the density of the point process, i.e., $p(X)= \\\\prod^N_i \\\\lambda(x_i|H_{x_i}) e^{-\\\\int \\\\lambda(x|H_{x})}$, where $H_{x}$ is the conditional history up to point $x$.\\nThis introduces a factorization across time, enabling autoregressive sampling but limiting conditioning tasks to simple forecasting and enforcing sequential sampling.\\nIn contrast, our approach leverages the thinning and superposition properties to sample from any conditional Janossy intensity without explicitly parameterizing it. \\nIntuitively, one can think of it as a different factorization of the Point Process density $p(X)$, not across time, but across a latent variable process -- Point Set Diffusion.\\nThus, our model is neither restricted to inhomogeneous Point Processes, i.e., $\\\\lambda(x_i|H_{x_i})= \\\\lambda(x_i)$ nor first-order statistics but generalizes to point processes with arbitrary interactions.\\n\\n\\n### W1: Point set diffusion is not limited to \\\"first-order-statistics\\\"\\nAs noted above, our model is not limited to first-order statistics or inhomogeneous intensities. 
It generalizes beyond conditional intensities of ordered processes, capturing any interaction between points on the metric space. For instance, it can predict the history given the future or a time window given past and future points.\nIn short, our model supports a broader range of intensity functions than the conditional intensities captured by standard STPP or TPP models.\n\n### W2: Why use the conditional sampling method (Algorithm 3.1)\nOur model is trained unconditionally to learn the joint density of point sets, so it is non-trivial to solve conditioning tasks by leveraging our joint density parametrization. \nWith Algorithm 3.1, we demonstrate the flexibility of our model and show how to condition our unconditionally-trained model for conditioning tasks on the metric space, subsuming forecasting, history prediction, and more complex conditioning tasks.\nThis algorithm is used for all conditional tasks in the paper: spatial conditioning (Sections 4.3, 4.5), temporal forecasting (Section 4.4, Appendix A.7.2, A.10).\n\n\n#### Clarifying definition of $q(X_{t-1}|X_0^c)$\n$q(X_{t-1}|X_0^c)$ refers to the Markov Chain of the forward process (Sec. 3.1), which noises the condition by thinning and adding noise across the domain. \nApplying the conditioning mask (Algorithm 3.1, line 6) yields the noised condition. We agree that this notation may be unclear and have annotated the algorithm for clarity.\n\n\n\n### W2, W3, W4: Conditional generation for (ordered) point processes\n\n#### Visualization\nWhile Figure 7\u2019s density plots may appear similar, they are distinct.
The spatial similarities are common for S(T)PPs, such as Earthquakes (Figure 7), and reflect a \\\"shared\\\" spatial pattern along tectonic plates.\\nTo better demonstrate our conditional generation, we added plots (Appendix A.10) showing the spatial density at different time points for the Earthquake dataset.\\nSince, unlike other STPP or TPP models, our approach does not parameterize the intensity of the next point given the past but models all following points, we cannot show the conditional density plots requested in W3 and W4.\\nHowever, in the plot, we show sliding forecast windows (e.g., forecasting 1/6th of the time domain at different time points, (0, 1/6, 2/6,...,5/6)), revealing that our conditional densities change in time and are influenced by prior events (e.g., earthquake aftershocks).\\n\\n\\n#### Conditional results\\nSections 4.3 and 4.4 report state-of-the-art results for conditional tasks on SPPs and STPPs. \\nFurthermore, Appendix A.7.2 compares our model to ADD-THIN [3] on their TPP forecasting task, where they recently showed state-of-the-art results. \\nNote that our model, unlike ADD-THIN, is not specifically trained for this task. \\nStill, our model closely matches or even outperforms ADD-THIN on all datasets.\"}", "{\"summary\": \"This paper proposes a diffusion-based latent variable model for general point processes on metric spaces. By learning stochastic interpolations between data and noise point sets, the model enables efficient, parallel sampling and flexible generation for complex tasks on the metric space. Experiments on synthetic and real-world datasets show that the proposed model achieves state-of-the-art results in unconditional and conditional tasks for spatial and spatio-temporal point processes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
This paper generalizes the Add-Thin model to define a model for point processes on general metric spaces, enhancing the model's applicability and promising future prospects.\\n2. The idea is sound and well-founded. The paper is overall well-written and easy to follow.\\n3. Experiments show that the proposed model achieves state-of-the-art results on both conditional and unconditional tasks while enabling faster sampling.\", \"weaknesses\": \"1. It would be helpful to discuss the connections between the proposed model and the Add-Thin model when modeling univariate temporal point processes.\\n2. In the conditional sampling, the definition of $q(X_{t-1} | X_{0}^c)$ in line 287 was not provided.\", \"typo\": \"$X_{t+1}^{\\\\text{thin}}$ and $X_{t}^{\\\\text{thin}}$ in Eq.(9) should be $X_{t+1}^{\\\\varepsilon}$ and $X_{t}^{\\\\varepsilon}$.\", \"questions\": \"1. In the experiments, how are $\\\\alpha_t$, $\\\\beta_t$, and $T$ set?\\n\\n2. The proposed model generalizes the Add-Thin model to general metric spaces. When modeling univariate TPPs, how does the performance of the proposed model compare to Add-Thin in both unconditional and conditional sampling scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes an interesting diffusion-based method for modeling and sampling point sets of point process distributions. For spatio-temporal point processes (STPP) the dominant approach is to apply autoregressive models that move points across time. This work compares to those methods, showing that they can sample point sets much more efficiently by parallelizing across time. Expert reviewers felt that the work was interesting, taking a new approach than prior work, and showing some clear advantages experimentally. We would be happy for this paper to be presented at ICLR. 
The authors already updated their appendix to address questions raised by the reviewers. We would encourage them to continue refining the work before the final version of the paper, and adding further experiments that could not be run due to time limitations.\", \"additional_comments_on_reviewer_discussion\": \"There was a productive reviewer discussion, mainly between the authors and the most expert reviewer, Reviewer XRB4. The authors added several additional experiments to the paper and I think the discussion will also positively impact the presentation of the paper.\"}", "{\"comment\": \"# Response\\n\\nWe would like to thank the reviewer for their thoughtful and constructive feedback. \\nWe appreciate the points raised and hope the following addresses the questions and suggestions.\\n\\n### W1, Q1 & W3: Computational complexity\\nWe have added the average training runtimes to the appendix (see A.8), demonstrating that our model is comparatively fast to train. \\nNotably, our model has the fewest learnable parameters\\u2014up to two orders of magnitude fewer than the baselines\\u2014further allowing us to reduce the effective compute time by running multiple models in parallel on a single GPU.\\nFurthermore, as shown in Figure 6 and discussed in Section 4.4, the computational complexity with respect to the number of points in a set remains nearly constant for our model. \\nWhile the full attention in our encoder could eventually impact scaling, this could then be mitigated by limiting the attention to a fixed context window, a practice already leveraged by the baselines to keep them tractable.\\n\\n### W2 & Q2: Hyperparameter selection and sensitivity\\nWe generally find our model to be robust across different hyperparameter values, allowing us to use the same hyperparameters for all datasets.
We added the results for different numbers of diffusion steps $T$ to the appendix (see A.9) and have discussed them in our response to **reviewer r6U8**; for further discussion of the noise schedule, please also refer to our response to **reviewer mmd5**.\\n\\n### Q3: Interpretability\\nWithout ordering, it is generally not possible to effectively model or sample the conditional Janossy intensity (Eq. 2) for point processes on general metric spaces. \\nTo address this, our proposed method learns the joint density of point sets through a diffusion-based latent variable model.\\nWhile this enables modeling point processes on general metric spaces, supports efficient parallel sampling, and allows for flexible generation in complex conditional tasks, it does not permit evaluating or interpreting the conditional intensity or its parameters.\\nTherefore, for applications involving ordered point processes (STPP, TPP) that require evaluation of the conditional intensity\\u2014such as estimating the likelihood of the next point given the past\\u2014point process models that directly approximate the conditional intensity are better suited. \\nIn contrast, our method prioritizes generality, scalability, and flexibility, excelling in unconditional and conditional generation tasks and addressing key limitations of intensity-based models.\\nWe have added a small discussion of this limitation to the last paragraph of our conclusion.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful response and are pleased that we resolved the confusion regarding our model's theoretical capacity.\\nWe greatly appreciate the reviewer's updated assessment and recognition of our method's value and contribution, as reflected in the raised score.\\n\\n\\n#### Synthetic forecasting experiment\\nWe believe our experiments on 17 real-world and synthetic datasets (SPPs: Sec. 4.3; STPPs: Sec. 
4.4, 4.5, A.10; TPPs: A.7.2) demonstrate our model's effectiveness and accuracy on various conditioning tasks. \\nFor (S)TPPs, these evaluate forecasting accuracy across over 50 forecast windows per test set instance.\\n\\nTo complement them, we trained our model unconditionally on 1500 samples from a synthetic STPP Hawkes process (Hawkes1 setup in [1]) with a mixture of two exponential kernels and a Gaussian spatial diffusion kernel with constant variance. \\nThis setup allows us to compute the likelihoods of entire point set samples w.r.t. the ground-truth process, as we know its conditional likelihood function.\\nThe table below shows this negative log-likelihood (NLL)($\\\\downarrow$) for 50 forecast samples on the last 5\\u201340% for 200 Hawkes sequences, comparing our model to the ground-truth Hawkes and a misspecified Hawkes (same parameters, but homogeneous base-intensity 0.1 vs. 0.2).\\n| | 5% | 10% | 15% | 20% | 25% | 30% | 35% | 40% |\\n|------------------------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| Ours | **0.909** | **0.943** | **0.995** | 1.043 | 1.075 | 1.127 | 1.163 | 1.222 |\\n| Hawkes (ground truth) | 0.999 | 1.015 | 1.029 | **1.022** | **1.013** | **1.009** | **1.004** | **1.021** |\\n| Hawkes (misspecified) | 1.085 | 1.140 | 1.200 | 1.236 | 1.270 | 1.312 | 1.379 | 1.442 |\\n\\nSince forecasting STPPs without access to the underlying process is inherently challenging, especially for longer horizons, our model doesn't fully match the forecasting distribution for longer periods but achieves and even surpasses the NLL of the generative process for shorter forecasts.\\n\\n#### Log-likelihood\\nWe agree that there are different approaches to computing the log-likelihood for (S)TPP models (hence our wording 'typically parameterized through the conditional intensity'), such as the normalized conditional intensity $p(x|H_{x_i})$ used in the two diffusion papers.\\nAs noted in our previous response, traditional (S)TPP 
likelihood evaluation is inherently autoregressive, assessing only how well models predict the next event given its ground-truth history with known failure modes and providing minimal insights for real-world applications.\\nSince the contribution of our paper is to generalize beyond ordered point sets by generating all points in parallel, we cannot evaluate this autoregressively factorized likelihood.\\nEstimating this likelihood as proposed requires $points$ x $n_{samples}$ samples from our model per test set instance, with batches of 2000 samples processed in 2\\u20133 seconds.\\nThus, sampling a reasonable number of forecasts to estimate the 3D conditional density for one dataset and seed would require multiple days, making this estimate computationally prohibitive.\\n\\nUltimately, we agree with the reviewer that likelihood computation is not a main flaw of our method, but we believe it highlights an important distinction from common (S)TPP models that warrants discussion. \\nThus, for a camera-ready version, we will extend our discussion of this matter briefly presented in the conclusion (see lines 531-536).\\n\\n#### Release of code/implementation?\\nWe will release the code with reproducible configurations of all experiments on GitHub upon acceptance.\\n\\n[1] Takahiro Omi, Naonori Ueda, and Kazuyuki Aihara. Fully neural network based model for general temporal point processes. In Advances in Neural Information Processing Systems, 2019.\"}", "{\"comment\": \"Thank you, the response addresses my question and concern. I keep my score and recommend for acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes a novel modeling approach to point processes via diffusion on point sets (discrete events), addressing the reliance on the intensity function when establishing or learning the model. It can capture the distribution of point processes and generate a series of events based on noise point sets. 
Meanwhile, the sampling efficiency of point set diffusion is superior. The overall presentations of both the methodology and experiments are excellent.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The idea of using the diffusion-style model to characterize point processes is super interesting. The content is clear and well-written, making the methodology and the results accessible to the reader. The paper also covers unconditional and conditional sampling methods, which have the potential to correspond to two important questions in the point process modeling (first-order and second-order modeling). The authors also provide thorough experimentation to validate the effectiveness of the proposed model.\", \"weaknesses\": \"In my opinion, the main weakness, or the most improvement-needed part of the paper, lies in the modeling and experiments of ordered point processes:\\n\\n1. An important characteristic of the ordered point processes (TPPs or STPPs) is the dependence between future events and past events, which is not considered in the model. The proposed method seems to only consider the first-order statistics of the data (the event intensity/density), and treat these statistics at certain times or locations as fixed values to be learned by the model. For example, if the training data set contains multiple event trajectories sampled over the horizon of $[0, T]$, then the model will assume $p(T/2)$ (the event density at $T/2$) is fixed and to be learned. However, in TPPs, the $p(T/2)$ depends on the history (observation before $T/2$), and is different in each realization of the event trajectory, which violates the assumption of the diffusion model.\\n\\n2. Although a conditional sampling method is proposed in the paper (Algorithm 3.1), I am wondering about its effectiveness in practice. First, what the $q(X_{t-1}|X_{0}^{c})$ is (line 287) remains unknown. 
Meanwhile, the (technical/practical) reason for using this conditioning is not shown. The results in Figure 7 are not convincing enough. To me, even though the authors claim that they are solving conditioning tasks and are visualizing the predicted densities for events from different trajectories (panels at the bottom), these density plots would look similar if we overlap them with each other. In other words, I think the model only predicts an averaged event intensity over space, and it has little connection with the conditioned samples.\\n\\n3. An alternative way to prove the effectiveness of conditional generation is to show the predicted intensity/density of events at different times, given a trajectory from the pinwheel dataset. This is the same idea as Figure 5 in [1]. The difference between density functions at different times is more significant and would help validate the conditional sampling method.\\n\\n4. The conditional sampling task is only experimented with in the spatial domain. An example of showing the evolution of the predicted conditional density of a pinwheel trajectory can support the claim of effective conditional sampling in an ordered (temporal) domain.\\n\\n5. I am also concerned that there is no log-likelihood metric reported in the paper. The metrics used in the paper are about the first-order characteristics of the data, on which I believe the proposed Point Set Diffusion can perform well. However, they cannot fully reflect the model's fit to the data when second-order data dependencies are involved (e.g., in ordered point processes). On the other hand, the log-likelihood is still the golden standard to suggest the model's goodness-of-fit to the data when it comes to conditional models or tasks [2][3]. Other point process studies that use the diffusion model will also report the data log-likelihood when evaluating the model [4][5].
I am curious about the proposed model's performance on the log-likelihood metric.\\n\\nAgain, I acknowledge and respect the authors' contribution to the proposed method, and I hope the above questions can be properly answered or addressed.\\n\\n---\\n[1] Chen, Ricky TQ, Brandon Amos, and Maximilian Nickel. \\\"Neural Spatio-Temporal Point Processes.\\\" International Conference on Learning Representations.\\n\\n[2] Daryl J Daley, David Vere-Jones, et al. An introduction to the theory of point processes: volume I: elementary theory and methods. Springer, 2003\\n\\n[3] Reinhart, Alex. \\\"A review of self-exciting spatio-temporal point processes and their applications.\\\" Statistical Science 33.3 (2018): 299-318.\\n\\n[4] Dong, Zheng, Zekai Fan, and Shixiang Zhu. \\\"Conditional Generative Modeling for High-dimensional Marked Temporal Point Processes.\\\" arXiv preprint arXiv:2305.12569 (2023).\\n\\n[5] Yuan, Yuan, et al. \\\"Spatio-temporal diffusion point processes.\\\" Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.\", \"questions\": \"1. In Figure 5, can the authors show the predicted density of different trajectories in the same masked area?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### W5: Log-likelihood metric\\nThe log-likelihood (LL) measures the likelihood of the next event given its history, typically parameterized through the conditional intensity. 
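Concretely, for a TPP this quantity is $\sum_i \log \lambda(t_i|H_{t_i}) - \int_0^T \lambda(s|H_s)\,ds$; as a reference sketch, both terms are available in closed form for an exponential-kernel Hawkes process (the parameter values here are illustrative only):

```python
import numpy as np

def hawkes_nll(times, horizon, mu=0.2, alpha=0.5, beta=1.0):
    # Negative log-likelihood of an event sequence on [0, horizon] under a
    # Hawkes process with conditional intensity
    #   lambda(t | H_t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    times = np.asarray(times, dtype=float)
    log_term = 0.0
    for i, t in enumerate(times):
        lam = mu + alpha * np.exp(-beta * (t - times[:i])).sum()
        log_term += np.log(lam)
    # Compensator: integral of the intensity over [0, horizon], closed form
    # for the exponential kernel.
    compensator = mu * horizon + (alpha / beta) * (
        1.0 - np.exp(-beta * (horizon - times))).sum()
    return compensator - log_term

# With no events, the NLL reduces to the base-rate compensator mu * horizon.
assert abs(hawkes_nll([], 2.0) - 0.4) < 1e-12
```

Note that this closed form exists only because the conditional intensity is known explicitly, which is the crux of the discussion below.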
\nSince our model does not parameterize this conditional intensity\u2014unlike the diffusion models cited in the review\u2014reporting the LL is not applicable.\n\nThat said, we would like to respectfully challenge the notion of the LL as the gold standard for evaluation.\nThe LL is computed as $\sum^N_i \log \lambda(x_i|H_{x_i}) - \int \lambda(x|H_{x})$, with the integral spanning space and time for STPPs.\nWhile the LL is a natural metric for STPP baselines trained to optimize it, it is evaluated using the ground truth history, primarily reflecting the next-event prediction but not the quality of generated samples or forecasts in real-world applications. \nFurther, the LL has known failure modes, and models with high LL can still produce samples significantly different from the training data [5], an issue worsened by error accumulation in autoregressive sampling. \nThis issue has been raised by different PP papers [1][2][3], with [1] even stating that \"the NLL is mostly irrelevant as a measure of error in real-world applications.\" \nSimilarly, probabilistic forecasting has replaced LL as the \"gold standard\" in time series [4][6].\nAdditionally, the reported LL directly depends on model-specific approximations and parametrizations, complicating fair comparison across (S)TPP models and making it error-prone due to differing approximations and implementations.\n\nIn contrast, our experiments follow [2] and [3] by comparing the distributions of point set samples for different tasks, independent of the implementation of each model.\nIt is very important to state that the applied metrics capture more than just the first-order characteristics of the data. \nFor the unconditional task, we report the Wasserstein distance between the count distributions and the maximum mean discrepancy.
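Such an MMD can be estimated directly from samples; a small sketch with a Gaussian kernel on scalar per-sample statistics (a biased V-statistic estimate; the bandwidth and data are illustrative, not the paper's exact setup):

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared MMD with a Gaussian kernel between two
    # samples of scalar per-sequence statistics.
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 300), rng.normal(0, 1, 300))
diff = mmd2(rng.normal(0, 1, 300), rng.normal(2, 1, 300))

# Matching distributions yield a much smaller MMD than mismatched ones.
assert same < diff
```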
\nThis kernel-based statistical test compares the two distributions based on a sample-based distance metric (CD) (for further details on the MMD for point processes, please also refer to Appendix E.2 of [2]).\nGiven the ground truth target in the conditional tasks, we do not need to compare distributions of point processes. \nHence, we leverage the MAE for the difference in the number of points and the Point Process Wasserstein Distance, again a distance between two distributions, as each instance of a point process is a stochastic process.\n\nLastly, restricting evaluations to one-step-ahead LL would limit model design to parameterizations of a conditional intensity function for which we can efficiently or approximately compute the integral\u2013in our opinion, a significant restriction for the field of point processes.\n\n\n[1] Shchur, O., T\u00fcrkmen, A. C., Januschowski, T., & G\u00fcnnemann, S. \"Neural temporal point processes: A review.\" IJCAI (2021)\n\n[2] Shchur, O., Gao, N., Bilo\u0161, M., & G\u00fcnnemann, S. \"Fast and flexible temporal point processes with triangular maps.\" NeurIPS (2020)\n\n[3] L\u00fcdke, D., Bilo\u0161, M., Shchur, O., Lienen, M., & G\u00fcnnemann, S. Add and thin: Diffusion for temporal point processes. NeurIPS, (2023)\n\n[4] Tilmann Gneiting and Matthias Katzfuss. Probabilistic forecasting. Annual Review of Statistics and Its Application, (2014)\n\n[5] Theis, Lucas, A\u00e4ron van den Oord, and Matthias Bethge. \"A note on the evaluation of generative models.\" ICLR (2016)\n\n[6] Alexandrov et al. GluonTS: Probabilistic and neural time series modeling in Python.
JMLR 2020\"}", "{\"comment\": \"# Response\\n\\nWe appreciate the reviewer\\u2019s detailed evaluation and feedback on our work and respond to the comments below.\\n\\n### W2: Definition of $q(X_{t-1}|X_0^c)$\\n$q(X_{t-1}|X_0^c)$ in Algorithm 1 refers to the Markov Chain of the forward process (Section 3.1), which intuitively noises the condition by thinning and adding noise on the whole domain.\\nThen by applying the conditioning mask (line 6), we obtain the noised condition.\\nWe agree that the brevity of this notation might be hard to follow, and we annotated the lines in the algorithm accordingly.\\n\\n\\n### Typos\\nThanks for pointing out the typos; we have adjusted the manuscript accordingly.\\n\\n### W1 & Q2: Connection and performance difference to ADD-THIN\\nFirst, ADD-THIN also leverages the thinning and superposition properties to define a diffusion process for TPPs, where they mix the thinning and superposition in their noising process so that it consists of the superposition of $T+1$ point sets with different intensity functions, where even added points can be removed again, which significantly complicates the posterior and introduces redundant steps. 
\\nIn contrast, our model disentangles the superposition and thinning to attain two independent processes to allow for more explicit control and define the diffusion model independent of the intensity function as a stochastic interpolation of two point sets.\\nFurther, the parametrization of ADD-THIN is specific to TPPs and directly leverages the ordering of points (temporal embeddings, inter-event times, convolutional layers), while POINT SET DIFFUSION is agnostic to the ordering of points, making it applicable for modeling the general class of point processes on any metric space, including for example SPPs.\\nLastly, ADD-THIN needs to be explicitly trained for specific conditioning tasks, while we show how, after training, our unconditional POINT SET DIFFUSION model can be conditioned for arbitrary and unknown conditioning tasks on the metric space.\\nWe discuss these connections in our related work section in more generality.\\n\\nSecond, to show the performance difference, we ran all TPP experiments from ADD-THIN with our model and report them in appendix A7.\\nAs can be seen, we are able to match the SOTA performance of ADD-THIN on TPPs with our more general setting.\\nIt is especially noteworthy that our unconditionally trained model can surpass the conditionally trained ADD-THIN model on their forecasting task, which shows that our model can effectively capture the interaction between points by directly modeling the joint density.\\n\\n### Q1: Noise schedule\\n$T$ is set to 100 steps, where we refer the reviewer to our discussion of the impact of $T$ in our answer to **reviewer r6U8**.\\nRegarding $\\\\alpha$ and $\\\\beta$, we have found $\\\\bar{\\\\alpha}_t = 1-\\\\bar{\\\\beta}_t$ to be effective since it ensures a direct interpolation between the two point sets, which if $\\\\int_A \\\\lambda^{\\\\epsilon} = E[N(A)]$ ensures a constant expected number of points throughout the process.\\nRegarding the choice of a noise schedule, we initially experimented 
with a linear and a cosine schedule, where we found the cosine schedule to work marginally better.\\nHowever, the specifics of the noise schedule are an interesting direction for future work to explore, especially since most (continuous) noise schedules were designed for continuous Gaussian diffusion models focusing on the noise-to-signal ratio of a continuous variable.\\nIn contrast, our diffusion process is distinctively more discrete (i.e., thinning and superposition process), possibly opening a new pathway for future work.\"}", "{\"comment\": \"# Response\\n\\nWe want to thank the reviewer for their feedback and appreciation of our work and address their two points.\\n\\n### W1: SPP baselines\\nWe agree that while the very popular (Log-Gaussian) Cox process and the Regularized Model proposed at NeurIPS 2019 are highly relevant baselines, comparisons to additional SPP baselines would be valuable.\\nHowever, the unordered nature of SPPs does not permit direct and effective modeling or sampling of the conditional Janossy intensity, thereby precluding the application of traditional point process models designed for ordered spaces.\\nTo the best of our knowledge, there exists no other SPP model capable of capturing point-to-point interactions that underlie the unconditional or conditional SPP tasks.\\nWe hope this explanation clarifies our choice of baselines and emphasizes the challenges in identifying comparable methods within the scope of this work.\\n\\n\\n### Q1: Trade-off between sampling time and quality $(T)$\\n\\n| $T$ (Diffusion Steps) | SL (avg, Standard Error) | MMD (avg, Standard Error)|\\n|---------------------------|-----------------|-----------------|\\n| 20 | 0.018 \\u00b1 0.002 | 0.020 \\u00b1 0.0015 |\\n| 50 | 0.017 \\u00b1 0.002 | 0.020 \\u00b1 0.0002 |\\n| 100 | 0.014 \\u00b1 0.002 | 0.018 \\u00b1 0.0012 |\\n| 200 | 0.015 \\u00b1 0.001 | 0.018 \\u00b1 0.0005 |\\n\\nWe have used $T=100$ for all experiments and added a sentence to the 
hyperparameter paragraph in the Appendix. \\n\\nThe sampling time scales linearly with the number of diffusion steps $T$. \\nTo provide insight into how the number of steps affects sample quality, we have run a hyperparameter study for the unconditional STPP experiment on the validation set of the Earthquake dataset, evaluating $T \\\\in \\\\{ 20, 50, 100, 200 \\\\}$, averaged over three random seeds.\\nOur findings indicate that while fewer diffusion steps result in reduced sample quality, $T = 100$ strikes a good balance, already matching and even surpassing the quality observed at $T = 200$. \\nAlthough this result may seem counterintuitive to those familiar with standard Gaussian diffusion models, it highlights a key distinction of our approach: unlike Gaussian diffusion processes, our model employs inherently discrete Markov steps\\u2014specifically, the superposition and thinning of point sets with fixed cardinality.\\nAs a result, only a limited number of points can be added or removed over $T$ steps, imposing a natural ceiling on how much additional steps can improve sample quality.\"}", "{\"summary\": \"This paper proposes a novel diffusion-based approach to model point processes without relying on traditional intensity functions. This model is characterized by its ability to efficiently and flexibly generate point sets through stochastic interpolation between data and noise sets. 
Experiments on synthetic and real-world datasets demonstrate that the model achieves state-of-the-art performance in generating spatial and spatiotemporal point processes, significantly outperforming existing methods in terms of speed and accuracy of sample generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The approach of modeling point processes using the diffusion model is interesting.\", \"Efficient sampling is achieved by making effective use of thinning.\", \"The effectiveness of the proposed method is evaluated on artificial and real data.\", \"The manuscript is well-written.\"], \"weaknesses\": [\"There is not enough discussion about computational complexity.\", \"Not very clear on how to set hyperparameters.\", \"No mention of the effectiveness of the method with respect to the amount of data.\", \"No discussion of limitation.\"], \"questions\": [\"Can you add a discussion on learning time? How does the computational complexity increase, especially with more data points?\", \"How can hyperparameters (e.g. number of diffusion steps T or noise scheduling) be determined?\", \"Please tell me more about the limitation of the proposed method. For example, how robust is the proposed method in situations where there is little data? Also, will the interpretability of the proposed method be lower than parametric methods (e.g., DNN-based Hawkes processes), or will the number of sensitive hyperparameters increase by using diffusion models as a base?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4aWzNhmq4K
Choose Your Anchor Wisely: Effective Unlearning Diffusion Models via Concept Reconditioning
[ "Jingyu Zhu", "Ruiqi Zhang", "Licong Lin", "Song Mei" ]
Large-scale conditional diffusion models (DMs) have demonstrated exceptional ability in generating high-quality images from textual descriptions, gaining widespread use across various domains. However, these models also carry the risk of producing harmful, sensitive, or copyrighted content, creating a pressing need to remove such information from their generation capabilities. While retraining from scratch is prohibitively expensive, machine unlearning provides a more efficient solution by selectively removing undesirable knowledge while preserving utility. In this paper, we introduce \textbf{COncept REconditioning (CORE)}, a simple yet effective approach for unlearning diffusion models. Similar to some existing approaches, CORE guides the noise predictor conditioned on forget concepts towards an anchor generated from alternative concepts. However, CORE introduces key differences in the choice of anchor and retain loss, which contribute to its enhanced performance. We evaluate the unlearning effectiveness and retainability of CORE on UnlearnCanvas. Extensive experiments demonstrate that CORE surpasses state-of-the-art methods including its close variants and achieves near-perfect performance, especially when we aim to forget multiple concepts. More ablation studies show that CORE's careful selection of the anchor and retain loss is critical to its superior performance.
[ "Machine Unlearning", "Diffusion Models." ]
Reject
https://openreview.net/pdf?id=4aWzNhmq4K
https://openreview.net/forum?id=4aWzNhmq4K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "naaxvQsAv1", "lV2sT3gof9", "dH992UsLjO", "cQbwnC2fZO", "XfsHUN7pyX", "WqKPa9CEzV", "KgINPpBttc", "I3fwFC2rV9", "6LbjkhNBpN", "4JQ9MO4Emn" ], "note_type": [ "official_review", "official_comment", "comment", "official_comment", "decision", "official_comment", "official_review", "meta_review", "official_review", "official_comment" ], "note_created": [ 1729197976083, 1732845266326, 1732261416521, 1732795163115, 1737523989477, 1732794905840, 1730352864412, 1733896695571, 1730641242367, 1732795294366 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9536/Reviewer_ywC8" ], [ "ICLR.cc/2025/Conference/Submission9536/Reviewer_Adrx" ], [ "~Finn_Carter1" ], [ "ICLR.cc/2025/Conference/Submission9536/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9536/Authors" ], [ "ICLR.cc/2025/Conference/Submission9536/Reviewer_Adrx" ], [ "ICLR.cc/2025/Conference/Submission9536/Area_Chair_2dFf" ], [ "ICLR.cc/2025/Conference/Submission9536/Reviewer_3Vou" ], [ "ICLR.cc/2025/Conference/Submission9536/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces concept reconditioning (CORE), a simple yet effective approach for unlearning diffusion models. By guiding the noise predictor conditioned on forget concepts towards an anchor generated from alternative concepts, CORE surpasses state-of-the-art methods including its close variants and achieves near-perfect performance, especially when CORE aims to forget multiple concepts. The difference between CORE and other existing approaches is the choice of anchor and retain loss.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes COncept REconditioning (CORE), a new efficient and effective unlearning method on diffusion models.\\n2. 
Extensive tests on UnlearnCanvas demonstrate that CORE surpasses existing baselines, achieving near-perfect scores and setting new state-of-the-art performance for unlearning diffusion models. CORE also exhibits strong generalization in unlearning styles.\\n3. The ablation studies in paper show that the benefits of using a fixed, non-trainable target noise over other unlearning methods.\", \"weaknesses\": \"1. The entire paper feels quite redundant. The related work section and Chapter 2 cover the same material. The content after line 294 in Section 3.2 seems to repeat what was mentioned earlier.\\n2. The paper mentions various unlearning concepts, such as privacy and explicit content, but in practice, it only focuses on style. The paper claims generalization as one of its contributions, so how is this demonstrated? Or is CORE only applicable to style unlearning?\\n3. The paper compares many unlearning methods, but there is only one figure (Figure 2) showing the actual results, and each concept has just one result. The presentation of the outcomes is too sparse. Although the tables show some differences between the models, I still think some of the redundant content could be reduced to include more actual results.\\n4. In addition to the fact that the methods for removing concepts mentioned in the paper are not comprehensive, there are also methods described in references [1] and [2].\\n\\u30101\\u3011.Ni Z, Wei L, Li J, et al. Degeneration-tuning: Using scrambled grid shield unwanted concepts from stable diffusion[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 8900-8909.\\n\\u30102\\u3011.Patrick Schramowski, Manuel Brack, Bj\\u00f6rn Deiseroth, and Kristian Kersting. 2023. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
22522\\u201322531.\", \"questions\": \"1.How does the CORE method perform on other content, such as specific entities or specific concepts?\\n2.Why can't CORE be directly applied to SD1.5 and instead requires fine-tuning on UnlearnCanvas? From my personal experience, fine-tuning SD1.5 leads to significant changes in its performance, and unlearning on a fine-tuned model makes it relatively easier to control its performance on other non-unlearning concepts. However, this shouldn't reflect the actual scenario.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response. While you emphasize the innovations of the CORE method, I still have reservations about its actual contributions. In particular, the lack of sufficient experimental support for comparison with advanced methods makes evaluation challenging. As far as I know, both SPM and UCE provide good open-source code, making it theoretically feasible to conduct experiments based on that code. Additionally, regarding the discussion of concept retention, the significant changes evident in Figure 2 lead me to disagree with your assertion of good retention. Given that the author has not adequately addressed my concerns, I will maintain my rating.\"}", "{\"title\": \"Lack of recent related works\", \"comment\": \"It seems that several recent highly related works [1,2,3] are ignored.\\n\\n[1] Separable Multi-Concept Erasure from Diffusion Models\\n\\n[2] MACE: Mass Concept Erasure in Diffusion Models\\n\\n[3] Receler: Reliable Concept Erasing of Text-to-Image Diffusion Models via Lightweight Erasers\"}", "{\"title\": \"Rebuttal-1\", \"comment\": \"Q1: The proposed method appears rather trivial. The author simply presents a pairing method of anchor and forget concepts (either from the retain set or other forget concepts) within the unlearning objective of Concept Ablation (CA)[1]. 
This is highly engineering-focused and lacks adequate innovation. The proposed retaining loss only transitions from predicting Gaussian noise to aligning with the prediction of the pretrained model. Although experimentally proven effective by the author as indicated in Table 3, the author does not discuss this aspect in sufficient depth, and it is regarded as a relatively minor improvement.\", \"a1\": \"Thank you for your feedback. While our method (CORE) is related to Concept Ablation (CA), it introduces several key innovations that make it fundamentally different:\\n\\n1. Fixed Anchor Concept: In our unlearning loss, we fix an anchor concept and compute the error between the unlearned model's output and the fixed pretrained diffusion model's output. In contrast, CA computes the error between the unlearned model and itself. Our approach is based on the intuition that a fixed target provides stability during training.\\n\\n2. Retain Loss: We replace the Gaussian random vector with the prediction from the pretrained model. This aligns with statistical intuition that using an estimated parameter can sometimes yield better performance than using a random one [1,2]. [1]. When is the estimated propensity score better? high-dimensional analysis and bias correction. [2]. A puzzling phenomenon in semiparametric estimation problems with infinite-dimensional nuisance parameters\\n\\n3. One-to-One Mapping Scheme: We design a one-to-one mapping between forget concepts and retain concepts, which significantly outperforms traditional pairing schemes used in CA and other methods, especially when forgetting multiple concepts (see Table 4).\\n\\nThese design choices not only differentiate CORE from traditional unlearning methods but also contribute to its superior performance, as evidenced by our experimental results.\", \"q2\": \"There is a deficiency in the comparison with some state-of-the-art methods in the experiments [2, 3, 4].\", \"a2\": \"Thank you for bringing this up. 
We attempted to include the SPM algorithm from [2] and the UCE algorithm from [4] using the UnlearnCanvas codebase. However:\", \"uce\": \"The images generated by the unlearned model using UCE were vague and lacked meaningful content, performing worse than reported in the UnlearnCanvas paper. Therefore, we did not include these results in our submission.\", \"spm\": \"Running SPM required significantly more computational resources\\u2014approximately 30 times more than CA or SalUn, as reported in the UnlearnCanvas paper. Due to these constraints, we could not run the full SPM algorithm for unlearning 6 or 25 concepts.\\n\\nMoreover, the UnlearnCanvas paper indicates that SPM and UCE are outperformed by other baselines like EDiff, CA, SalUn, and ESD, which we have included in our comparisons. We believe our selection of strong baselines provides a fair evaluation, and our proposed algorithm demonstrates superior performance against them.\", \"q3\": \"The experiments lack comparisons with more models. For example, SD v1.4, which is commonly employed by previous methods, and larger models like SD - XL. Additionally, there is a lack of results validating the retaining effect on large-scale datasets, such as COCO - 30K.\", \"a3\": \"We appreciate your suggestion. However, the UnlearnCanvas codebase supports only SD v1.5. Testing algorithms on the UnlearnCanvas benchmark requires fine-tuning a diffusion model on a dataset comprising 50 styles and 20 objects, with 20 images for each combination\\u2014a process demanding substantial computational resources that were beyond our capacity. UnlearnCanvas provides a fine-tuned SD v1.5 model, which we used to apply our method and the baselines.\", \"q4\": \"The visualization results do not utilize the commonly used prompts adopted by previous works [1][2], making it difficult to demonstrate advantages over previous efforts. 
Moreover, the retained concepts also exhibit changes in the image content, as seen in Figure 2.\", \"a4\": \"Thank you for this insight. In our original submission, we used the prompts provided by UnlearnCanvas for fair comparison. We have since conducted additional experiments using the general prompts adopted in [1] to compare our method with the baselines. We will include these results in the next version.\\n\\nBriefly, when unlearning 25 concepts with general prompts, our method achieved a total score of 371.11. The baselines scored as follows: ESD (319.42), EDiff (316.03), CA-model (325.14), CA-noise (319.97), and SalUn (290.14). These results demonstrate that CORE significantly outperforms strong baselines even with more general prompts.\\n\\nRegarding the retained concepts, our observations indicate that they are well preserved by our method, without significant changes in image content. We will include additional images in the next version to clarify this point.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Q1: The visual results presented are insufficient. I am particularly interested in scenarios where the forget concepts and the retain concepts contain the same object but differ in adjectives. For instance, in Figure 3, \\\"Dadaism Cat\\\" is expected to be forgotten, while \\\"Vibrant Flow Cat\\\" should be retained. Could you provide additional visual results for this kind of situation?\", \"a1\": \"Thank you for your observation. We have already included a comparison between \\\"Dadaism Cat\\\" (the concept to forget) and \\\"Vibrant Flow Cat\\\" (the concept to retain) in Figure 3. We will add more visual results of similar scenarios in the next version. Our experiments focus on unlearning specific styles across all objects in the dataset. 
Therefore, our visual presentations primarily compare different styles to demonstrate that our algorithm can effectively unlearn various objects under the same style.\", \"q2\": \"Ablation study. Without the retain loss, how much worse will the model be?\", \"a2\": \"Thank you for your question. Without the retain loss, the CORE algorithm's performance decreases significantly. In a test where we aimed to forget six concepts, the model without the retain loss achieved the following scores: UA of 87.5, IRA of 99.5, CRA of 69, and SFID of 83.7, and the total score is 339.2. This is notably lower than the 387.06 score achieved by the original CORE algorithm (as shown in Table 1 of our submission).\\n\\nIncluding a retain loss term is standard in machine unlearning methods for both language models and diffusion models [1,2,3,4], as it is essential for good performance. Our algorithm introduces innovations in both the unlearn loss and the retain loss. We demonstrate that our retain loss outperforms those used in prior work (see the last row of Table 3). [1] Ablating Concepts in Text-to-Image Diffusion Models. [2] Selective amnesia: A continual learning approach to forgetting in deep generative models. [3] Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation [4].\\u201cForget-me-not: Learning to forget in text-to-image diffusion models.\", \"q3\": \"In line 230, the statement \\\"In the unlearning objective, p_a acts as an anchor concept to recondition images from the forget set onto\\\" appears incomplete. It seems that there is a missing component following the word \\\"onto.\\\"\", \"a3\": \"We thank the reviewer for their observation. We will modify this sentence in the next version to clarify it.\"}", "{\"summary\": \"This work proposes Concept REconditioning (CORE), a simple yet effective approach for unlearning harmful, sensitive, or copyrighted content from diffusion models. 
The key contribution lies in the selection of anchor concepts and the retain loss. Extensive experiments demonstrate that CORE surpasses state-of-the-art methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The paper is well written.\\n2. Machine unlearning is an interesting topic and studying how to unlearn some concepts in SD models is important.\", \"weaknesses\": \"1. The proposed method appears rather trivial. The author simply presents a pairing method of anchor and forget concepts (either from the retain set or other forget concepts) within the unlearning objective of Concept Ablation (CA)[1]. This is highly engineering-focused and lacks adequate innovation. The proposed retaining loss only transitions from predicting Gaussian noise to aligning with the prediction of the pretrained model. Although experimentally proven effective by the author as indicated in Table 3, the author does not discuss this aspect in sufficient depth, and it is regarded as a relatively minor improvement.\\n\\n2. There is a deficiency in the comparison with some state-of-the-art methods in the experiments [2, 3, 4].\\n\\n3. The experiments lack comparisons with more models. For example, SD v1.4, which is commonly employed by previous methods, and larger models like SD-XL. Additionally, there is a lack of results validating the retaining effect on large-scale datasets, such as COCO-30K.\\n\\n4. The visualization results do not utilize the commonly used prompts adopted by previous works [1][2], making it difficult to demonstrate advantages over previous efforts. Moreover, the retained concepts also exhibit changes in the image content, as seen in Figure 2.\", \"references\": \"[1] Ablating Concepts in Text-to-Image Diffusion Models\\n[2] One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications\\n[3] To generate or not? 
safety-driven unlearned diffusion models are still easy to generate unsafe images... for now\\n[4] Unified Concept Editing in Diffusion Models\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents an unlearning method for diffusion models, titled COncept REconditioning (CORE). Reviewer concerns remain unaddressed, particularly regarding the lack of comprehensive experimental comparisons with SOTA methods, ambiguities in the key concept of retention, and an incomplete review of related literature. Moreover, it has been noted that the paper has already been accepted at the NeurIPS 2024 SafeGenAI workshop. In accordance with ICLR\\u2019s submission policy, the recommendation is to reject this submission.\", \"additional_comments_on_reviewer_discussion\": \"The paper has already been accepted by the NeurIPS 2024 SafeGenAI workshop.\"}", "{\"summary\": \"This paper introduces a novel method, termed CORE, designed for the unlearning of diffusion models by selectively eliminating undesirable knowledge. The proposed approach includes an innovative strategy for anchor selection and a newly formulated retain loss. Experimental results demonstrate the method's superior performance compared to existing techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experimental setup is well-structured, effectively addressing the majority of my inquiries regarding this method.\\n2. The performance outcomes appear to be satisfactory. \\n3. The writing is commendable; the method is articulated clearly, and its key distinctions from other approaches are clearly stated.\", \"weaknesses\": \"1. The visual results presented are insufficient. 
I am particularly interested in scenarios where the forget concepts and the retain concepts contain the same object but differ in adjectives. For instance, in Figure 3, \\\"Dadaism *Cat*\\\" is expected to be forgotten, while \\\"Vibrant Flow *Cat*\\\" should be retained. Could you provide additional visual results for this kind of situation?\\n2. Ablation study. Without the retain loss, how much worse will the model be?\\n3. In line 230, the statement \\\"In the unlearning objective, p_a acts as an anchor concept to recondition images from the forget set onto\\\" appears incomplete. It seems that there is a missing component following the word \\\"onto.\\\"\", \"questions\": \"The explanation of the key differences from other methods, along with the experimental results, solve most of my questions. I have no further questions aside from those mentioned before.\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal-1\", \"comment\": \"Q1: The entire paper feels quite redundant. The related work section and Chapter 2 cover the same material. The content after line 294 in Section 3.2 seems to repeat what was mentioned earlier.\", \"a1\": \"We thank the reviewer for their careful reading. We will modify the presentation in the next version and make it more precise and more clear.\", \"q2\": \"The paper mentions various unlearning concepts, such as privacy and explicit content, but in practice, it only focuses on style. The paper claims generalization as one of its contributions, so how is this demonstrated? Or is CORE only applicable to style unlearning?\", \"a2\": \"Thank you for your observation. 
While our experiments focus on unlearning styles\\u2014a common benchmark for diffusion models in image generation\\u2014we believe our method can be applied to other types of content, such as removing sensitive or private content from images. Extending CORE to these areas is an important future direction, though it is beyond the scope of this submission.\\n\\nOur claim of strong generalization refers to the model's ability to unlearn styles across unseen objects. To test this, we divided all objects into a training set (used for unlearning) and a test set (used for evaluation). Our results show that CORE effectively unlearns styles even on unseen objects, demonstrating strong generalization capabilities.\\n\\nAdditionally, we applied CORE to the I2P benchmark, which includes sensitive images (e.g., those containing nudity). The results, presented in the appendix, show that CORE can effectively remove sensitive content, indicating its applicability beyond style unlearning.\", \"q3\": \"The paper compares many unlearning methods, but there is only one figure (Figure 2) showing the actual results, and each concept has just one result. The presentation of the outcomes is too sparse. Although the tables show some differences between the models, I still think some of the redundant content could be reduced to include more actual results.\", \"a3\": \"Thank you for your suggestion. We have included more results in the appendix (see Figures 3 and 4) to demonstrate the effectiveness of our method. While we have shown a subset of the generated images, we believe they are representative of our method's overall performance. The quantitative results in Tables 1\\u20134 are computed over all styles and objects, providing a comprehensive evaluation. 
We will consider reducing redundant content to include more visual results in the next version.\", \"q4\": \"In addition to the fact that the methods for removing concepts mentioned in the paper are not comprehensive, there are also methods described in references [1] and [2].\", \"a4\": \"Thank you for highlighting additional related work. Reference [1] introduces the technique of scrambled grids in the training loss, and [2] (Safe Latent Diffusion, SLD) modifies the latent space to improve unlearning performance. We will discuss these methods in more detail in the next version.\", \"q5\": \"How does the CORE method perform on other content, such as specific entities or specific concepts?\", \"a5\": \"Thank you for your question. While our submission focuses on the UnlearnCanvas benchmark\\u2014a comprehensive evaluation framework for unlearning methods\\u2014we also conducted experiments on the I2P benchmark, which includes unsafe and sensitive images. The results, presented in the appendix, show that CORE effectively unlearns sensitive and unsafe content, such as images containing nudity. This demonstrates that CORE can be applied to a broader range of tasks beyond style unlearning.\", \"q6\": \"Why can't CORE be directly applied to SD1.5 and instead requires fine-tuning on UnlearnCanvas? From my personal experience, fine-tuning SD1.5 leads to significant changes in its performance, and unlearning on a fine-tuned model makes it relatively easier to control its performance on other non-unlearning concepts. However, this shouldn't reflect the actual scenario.\", \"a6\": \"Thank you for your question. The fine-tuning and unlearning scheme was implemented by UnlearnCanvas. They fine-tuned the Stable Diffusion v1.5 model on their dataset to enable the model to generate images with specific styles and objects included in the benchmark. 
Without fine-tuning, the original SD v1.5 model performs poorly in generating images with those styles.\\n\\nStarting the unlearning process from a fine-tuned model ensures that we evaluate the unlearning methods on a model that has already learned the targeted concepts. We agree that fine-tuning can change a model's performance, and unlearning on a fine-tuned model might make it easier to control performance on other concepts. However, this approach allows for a fair and consistent evaluation across different unlearning methods within the context of the UnlearnCanvas benchmark.\"}" ] }
4a9doRh3Jv
Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding
[ "Kaiyan Zhang", "Jianyu Wang", "Ning Ding", "Biqing Qi", "Ermo Hua", "Xingtai Lv", "Bowen Zhou" ]
Large Language Models (LLMs) exhibit impressive capabilities across various applications but encounter substantial challenges such as high inference latency, considerable training costs, and the generation of hallucinations. Collaborative decoding between large and small language models (SLMs) presents a promising strategy to mitigate these issues through methods including speculative decoding, contrastive decoding, and emulator or proxy fine-tuning. However, the specifics of such collaborations, particularly from a unified perspective, remain largely unexplored. Inspired by dual-process cognitive theory, we propose a unified framework in this paper, termed Fast and Slow Generating (FS-GEN). Within this framework, LLMs (sometimes along with SLMs) are categorized as System 2 (slow and deliberate), while independent SLMs are designated as System 1 (fast and intuitive). We provide a comprehensive analysis of these collaborative methodologies, elucidating their common properties and shedding light on the differential knowledge capabilities of System 2 versus System 1 through the FS-GEN framework. Our findings indicate that only a small proportion of collaborative interactions (approximately less than 20\% in most instances) are necessary across various methods. These interactions between System 1 and System 2 conform to a scaling law related to the parameter ratios, enabling predictable collaboration. Furthermore, we explore the specific conditions under which collaboration proves most effective, particularly from an uncertainty perspective, offering novel insights that may guide future optimization efforts. Our research underscores that the fundamental distinction between System 1 and System 2 lies in the uncertainty of next token predictions, where interventions by System 2 are crucial to support System 1. We provide code for reproduction: https://anonymous.4open.science/r/ICLR2025_Anonymous-127D/README.md
[ "Large Language Models", "Collaborative Decoding" ]
Reject
https://openreview.net/pdf?id=4a9doRh3Jv
https://openreview.net/forum?id=4a9doRh3Jv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x1QWaaTOSe", "vpnFY81sh7", "sedAWR8cTb", "rQt0pDLTk0", "pFDna3M21t", "lBtecNEmcS", "jtUyzkDKm0", "dwSzy81WWg", "c8IOgfUvov", "bCORS70gwN", "VCyfaXIWME", "TEVeoPsuX7", "T5jL2o1aRv", "Si1zURQN7a", "Rycjt56JXu", "LjJK2hL7Dr", "LUDv3mAYjK", "JJqFwSzzrl", "H9W0GKxNnh", "F7TXgAFk3L", "DdrrFsPpVd", "DPXUTQjNeQ", "BfumMDZOcK", "87Ek9xhLwN", "62GTW2VtFE", "5l5v9He67h", "5hUKv0bSDi", "2ba5eCaZHt", "1SD2tJFIzC" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732273136052, 1732274762691, 1737523410038, 1732869195605, 1730732507189, 1732273206314, 1730920328840, 1732525770663, 1732274688931, 1732162506733, 1732273228522, 1732814497556, 1732274720885, 1730914580920, 1732516999644, 1732274600434, 1734712675904, 1732278673532, 1732274643578, 1732273271311, 1732272926834, 1732273037034, 1732483231660, 1730478260592, 1732631500067, 1732292222360, 1732272987649, 1732346911875, 1732272879719 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_gboG" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_DHMi" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_VTMo" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission670/Area_Chair_ypw2" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_DHMi" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_zvri" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Area_Chair_ypw2" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_VTMo" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_gboG" ], [ "ICLR.cc/2025/Conference/Submission670/Reviewer_VTMo" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ], [ "ICLR.cc/2025/Conference/Submission670/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response - 1\", \"comment\": \"We sincerely appreciate the time and effort you have invested in reviewing our paper. We apologize for any confusion and would like to clarify our motivations and experimental settings.\\n\\n### Q1: About the Presentation of the Paper\\nThank you for your comments. We would like to share our thoughts on the \\u201cslide deck\\u201d presentation style, which aims to make the conclusions clearer, as **Reviewer DHMi** aptly summarized.\\nGiven the page limitations, we chose to highlight the main analyses and conclusions for each figure and finding. 
However, we recognize that this approach may impose a burden of understanding, especially for readers less familiar with the field.\", \"to_address_this\": \"- We will strengthen the connections between the figures and their corresponding conclusions in response to the questions raised. Preliminary connection instructions will be provided in #Q2.\\n- We apologize for the lack of detailed descriptions of the experiments and the process of deriving conclusions. In the revised version, we will reorganize the paper to include related work and preliminary knowledge in the main text.\\n\\n### Q2: About the Working Mode of Collaborative Decoding\\n#### A1: Running Example\\nOur primary objective is to analyze the frequency of collaboration in various decoding settings.\\nIn our research, we explore collaborative decoding (CoDec) at all steps ($CoF=1$), for example:\\n```\\n[User]: \\\"Who is Donald Trump?\\\"\\n[Assistant]: \\\"Donald Trump is the former President of the United States, who is 78 years old now.\\\"\\n```\\n\\nFor a lower collaboration frequency ($CoF_{\\\\text{lower}}$), we input the outputs of CoDec into smaller models token by token to assess the consistency of top tokens. (CoDec represents speculative decoding, contrastive decoding, or proxy tuning.)\\n```\\n- First token verification, match:\\n\\t- CoDec: [Assistant]: \\\"Donald\\\"\\n\\t- Small: [Assistant]: \\\"Donald\\\"\\n- Second token verification, match:\\n\\t- CoDec: [Assistant]: \\\"Donald Trump\\\"\\n\\t- Small: [Assistant]: \\\"Donald Trump\\\"\\n- ...\\n- 5th token verification, mismatch:\\n\\t- CoDec: [Assistant]: \\\"Donald Trump is the former President\\\"\\n\\t- Small: [Assistant]: \\\"Donald Trump is President\\\"\\n- ... (match)\\n- 14th token verification, mismatch:\\n\\t- CoDec: [Assistant]: \\\"Donald Trump is the former President of the United States, who is 78\\\"\\n\\t- Small: [Assistant]: \\\"Donald Trump is the former President of the United States, who is 80\\\"\\n- ... 
(match)\\n```\\n\\nAssuming there are two mismatched tokens (\\\"former\\\" and \\\"78\\\"), the calculated $CoF_{\\\\text{lower}}=\\\\frac{2}{18}$. However, unnecessary collaborations may occur even when matches are identified, leading to a variable where $CoF_{\\\\text{lower}} \\\\leq CoF \\\\leq 1$. This motivates our investigation into the lower bounds of collaboration frequency, aiming to achieve outputs similar to full collaborative decoding with minimal collaborative steps. Our findings demonstrate that this is a universal phenomenon across different collaborative decoding methods.\\n\\nSpeculative decoding currently selects a fixed number of tokens (K-tokens) for generation-verification, which does not effectively reach $CoF_{\\\\text{lower}}$. In contrast, methods such as contrastive decoding and proxy tuning entail collaborations at each step ($CoF=1$), which may not always be necessary.\\n\\n#### A2: Raw Accuracy Scores of Different Methods\\nWhile accuracy scores are not the primary focus of our experiments\\u2014where the outputs of mix-scaled models are treated as golden outputs\\u2014we provide raw accuracy scores for different collaborative methods to support the validity of our approach.\\n\\nThe results demonstrate the effectiveness of collaborative decoding, showing that mix-scaled models outperform small models operating independently. 
Furthermore, these findings underscore the potential to optimize collaboration efficiency based on the insights presented in our paper.\\n\\n- Table 1: Accuracy of Different Collaborative Decoding Methods on GSM8k\\n\\n| Model | Method | Qwen1.5-0.5B | Qwen1.5-1.8B | Qwen1.5-4B | Qwen1.5-7B |\\n| ------------ | ------ | ------------ | ------------ | ----------- | ----------- |\\n| Qwen1.5-0.5B | Self | 17.0 (Self) | - | - | - |\\n| Qwen1.5-1.8B | SD | 36.2 | 36.2 (Self) | - | - |\\n| | CD | 33.4 | \\\\ | - | - |\\n| | PT | 38.0 | \\\\ | - | - |\\n| Qwen1.5-4B | SD | 52.2 | 52.2 | 52.2 (Self) | - |\\n| | CD | 48.8 | 47.0 | \\\\ | - |\\n| | PT | 49.8 | 51.2 | \\\\ | - |\\n| Qwen1.5-7B | SD | 57.0 | 57.0 | 57.0 | 57.0 (Self) |\\n| | CD | 54.4 | 53.2 | 51.0 | \\\\ |\\n| | PT | 57.0 | 57.0 | 56.8 | \\\\ |\"}", "{\"title\": \"Response - 5\", \"comment\": \"### Q5: Greedy Decoding and Next-Token Perplexity for Collaboration Measuring\\n#### A1: Overview of Different Metrics\\nThis is an excellent question, and we appreciate the opportunity to address it. Below, we analyze the relationship between different metrics and provide comparative results.\\nAt each decoding step, we obtain logits over the vocabulary from the SLMs. These logits are normalized into the range $[0, 1]$ using the `softmax` function. This gives the next-token probabilities:\\n$$P(y_t) = \\\\text{softmax}(\\\\text{logits}_t)$$\\nDuring greedy decoding, the next token $y_t$ is selected as the one with the highest probability:\\n$$y_t = \\\\arg \\\\max_i^V P(y_t^i)$$\\nwhere $V$ is the size of the vocabulary.\", \"below_we_will_provide_an_analysis_of_the_next_token_entropy_and_perplexity\": [\"**Next token entropy**, $H(y_t)$, is defined as $H(y_t) = -\\\\sum_{i=1}^V P(y_t^i) \\\\log P(y_t^i)$, which quantifies the uncertainty of the model\\u2019s prediction at step t. 
Lower entropy indicates more confident predictions, while higher entropy suggests greater uncertainty in selecting the next token.\", \"**Perplexity**, when based on cross-entropy, requires access to the golden tokens (ground truth sequence). Since our routing analysis does not have access to golden tokens, we instead rely directly on entropy as a proxy for measuring uncertainty.\", \"**Next token perplexity** is defined as $\\\\text{Perplexity}(y_t) = 2^{H(y_t)}$, which transforms entropy into an interpretable measure representing the average branching factor of the model\\u2019s distribution over the next token. A lower perplexity implies a narrower, more confident distribution.\", \"**Sequence perplexity**, on the other hand, measures the uncertainty over the entire sequence and is defined as $\\\\text{Perplexity(sequence)} = 2^{-\\\\frac{1}{T} \\\\sum_{t=1}^T \\\\log P(y_t)}$, where T is the length of the sequence. Sequence perplexity can be seen as the geometric mean of next token perplexities across all decoding steps.\"], \"the_relationship_among_these_metrics_is_intrinsic\": \"next token logits determine next token entropy and perplexity through their normalized probabilities. Higher entropy and perplexity indicate a more uniform distribution, while lower values suggest peaked distributions. Sequence perplexity aggregates these effects over all steps, offering a global view of the model\\u2019s predictive confidence across the sequence.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your positive and encouraging feedback. In the original paper, we explored domains such as mathematics (GSM8k), code (MBPP), and general knowledge (MMLU). In the rebuttal, we expanded our analysis to include additional domains, such as medical knowledge (MedQA) and physical/chemical/biological sciences (GPQA). 
The results from these new domains consistently support our original findings.\\n\\nIn conclusion, our study covers a broad range of common domains, and we are enthusiastic about extending our approach to explore other relevant domains in future work. Thank you once again for your valuable feedback and for the opportunity to further refine our work.\"}", "{\"summary\": [\"The paper analyzes the patterns of collaboration between SLMs and LLMs when used in a collaborative decoding/training setup. By analyzing this behavior across multiple collaboration setups, tasks, and model families, the authors draw the following conclusions:\", \"The collaboration frequency peaks at about 20%, with the maximum collaboration happening when there's the biggest gap in the model sizes. In fact, there's an inverse scaling law connecting the model size ratio and the collaboration frequency (more clearly evident for Qwen models than Pythia).\", \"Most of the LLMs/System 2 interventions are required at the start of the decoding and for tokens for which SLMs are uncertain.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Proposes a new framework to analyze the collaborative behavior between models\", \"Empirical results shed new light on this collaborative behavior. In particular, the scaling law for collaboration and frequent positions of collaboration are quite interesting.\"], \"weaknesses\": \"- The paper analyzes speculative decoding, contrastive decoding, and proxy tuning. Except for speculative decoding, it's not clear if the analysis provides any executable insights for the other two setups.\\nDrawing questionable analogies with human cognitive processes just because one model runs fast and the other slow and then commenting about how the collaborative distributions are different (L127-L129) is extremely flawed reasoning. 
The analogy doesn't make sense, except for the fact that one model is faster and the other is slower.\", \"comments_about_writing\": [\"Why is O_g being used and not O_f for p_f (fused logits) in Section 2.2\", \"L053: \\\"allow\\\" -> \\\"allows\\\"\", \"L195: \\\"produce\\\" -> \\\"produced\\\"\"], \"questions\": [\"It is not clear what exactly is being illustrated in Figures 11, 12, and 13. What are the different features?\", \"How does one use the insights from this paper for contrastive decoding and proxy tuning?\", \"Currently, greedy decoding is used to establish whether collaboration is required or not. I wonder if the next token perplexity could be another measure.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response - 2\", \"comment\": [\"### Q3: Discussion of the Figures (Results of Various Methods on Benchmarks)\", \"#### A1: Connections between Figures and Conclusions\", \"We provide detailed connections between the results of various methods on benchmarks presented in the figures and our final findings. Specifically, we conducted experiments on three methods (**speculative decoding, contrastive decoding, proxy tuning**) across three benchmarks (**GSM8k, MMLU, MBPP**) and two model series (**Qwen, Pythia**) to derive our four main findings.\", \"The results for these combinations sufficiently support each finding, as outlined below:\", \"**Finding 1: 20% Collaborations \\u2014 The 2:8 Law**\", \"**Figures 2 (Qwen) and 3 (Pythia):** These figures show that the average collaboration frequency (interface rate) across different tasks is consistently less than 20%. 
Among the methods, contrastive decoding achieves much lower collaboration frequencies than speculative decoding and proxy tuning.\", \"**Task Variability:** While collaboration frequency varies across tasks and model series due to differing model capabilities, the values remain approximately 20%.\", \"**Model Combinations:** Collaboration frequency decreases as the parameter ratio between large and small models decreases. This implies that models with similar capabilities require less frequent collaboration, which also contributes to **Finding 2**.\", \"**Finding 2: Parameters Scale Ratio Law**\", \"**Figure 4 (Qwen):** This figure presents the fitting line for the parameter scale ratio between large and small models, showing a strong correlation with lower collaboration frequencies. The fitting effect demonstrates the validity of our parameter scale ratio law.\", \"**Task and Method Variability:** While the quality of the fitting line varies by task and method, the conclusions remain consistent.\", \"Figure 5 (Pythia) highlights how the fitting effect is influenced by model underperformance. To strengthen this finding, we provide additional results from OpenELM, which exhibit low fitting errors and further support the effectiveness of our scale ratio law.\", \"**Finding 3: Well Begun is Half Done**\", \"**Figures 6 (Qwen) and 7 (Pythia):** These figures illustrate the mismatch rate across the generated sequence. They reveal that most mismatches occur at the **beginning** of the sequence. 
As generation progresses, SLMs increasingly align with mix-scaled models, leveraging the shared context.\", \"**Task Variability:** The percentage of mismatches in a sequence varies by task, influenced by task difficulty and model capability.\", \"**Finding 4: Last in Uncertain Tokens**\", \"**Figure 10:** This figure compares the logits of each token generated by SLMs to those of mix-scaled models, showing that mismatches predominantly occur in high-entropy positions (indicating high uncertainty).\", \"**Figures 11, 12, 13:** These figures analyze the top-K token logits at each step, clustering tokens into match and mismatch categories. The results reveal a strong correlation between uncertainty (high-entropy positions) and matching labels.\", \"To ensure robustness, we compute average correlation scores across all tasks and methods. This finding identifies key positions for collaboration during decoding in SLMs, contributing to performance-cost optimization.\"]}", "{\"summary\": \"The paper explores collaborative decoding strategies between large language models (LLMs) and small language models (SLMs). The authors introduce the FS-GEN framework, categorizing LLMs as System 2 (slow and deliberate) and SLMs as System 1 (fast and intuitive). 
The research focuses on decoding methods like speculative decoding, contrastive decoding, and proxy tuning to improve efficiency and mitigate issues like high inference time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Originality:- The paper introduces a novel FS-GEN framework.\", \"quality\": \"- The tables and figures are very well used.\\nThe paper is written with a great clarity.\", \"significance\": [\"The paper compares from smaller models to larger ones, based on the number of parameters.\"], \"weaknesses\": \"Could provide more discussion of practical applications.\\nTrade-offs between the inference time and cost can be a great addition.\", \"the_experiments_focused_on_only_few_tasks_like\": [\"MMLU-STEM, GSM8k, and MBPP, Having experiments over domain specific datasets can give a better understanding.\"], \"questions\": \"How generalizable do you believe your findings are to other language tasks or domains?\\nHow do you think the collaborative patterns might change, If different sampling technique is used.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear authors,\\n\\nI see. I had understood from your answers that you agreed with my suggestions and were open to incorporating them, and I am sorry to hear that is not the case. I really don't intend to be mean about this, and I definitely see promise in your ideas, but I still believe the paper really needs substantial revisions to be ready for publication, and the *very* lengthy responses (with much information not contained in the paper), do not change my mind about this. I read the reviews of reviewer zvri and DHMi, who give higher scores, but their reviews are very short with not much info to go on. In other words, for me this paper is simply not ready for publication. 
I will not lower my score, but I will not make it higher either, I am sorry.\"}", "{\"title\": \"Response - 3\", \"comment\": \"### Q2: Discussion of the System 1 & System 2 Analogy\\nIn this work, we draw inspiration from the analogy of System 1 and System 2, simplifying their collaboration into **Fast and Slow thinking**. System 1 efficiently handles approximately 95% of routine tasks, while System 2 is reserved for deliberately addressing the remaining 5% of complex work [1]. Together, they demonstrate the power of **high-efficiency collaboration**.\\n\\nWe adopt this high-efficiency motivation to model the collaborative decoding methods between **fast and slow** (or **small and large**) models. Our experimental findings (Findings 1 and 2) show that small, fast models generate approximately 80% of tokens during the answering process, while large, slow models contribute the remaining 20%.\\n\\nLooking forward, we aim to expand these collaborative mechanisms to **reasoner and knowledger models**, such as OpenAI\\u2019s o1 and GPT-4, which not only embody the fast/slow model paradigm but also represent intuitive and deliberate thinking. Preliminary experiments reinforce our findings, indicating successful collaboration between o1 and existing large language models.\\n\\n\\n[1] Booch, Grady, et al. \\\"Thinking fast and slow in AI.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021.\\n\\n\\n### Q3: Response to Comments about Writing\\nIn Section 2.2, we use\\u00a0 $O_g$\\u00a0 to represent the **golden** outputs generated by mix-scaled language models. We agree that the suggestion to use\\u00a0 $O_f$\\u00a0 would provide better consistency with the **fused** logits. 
We will update this notation to enhance clarity and improve the reader\\u2019s understanding.\\nAdditionally, we appreciate you pointing out the typographical errors, and we will correct them in the updated manuscript.\"}", "{\"title\": \"Please respond and update the score if necessary\", \"comment\": \"Dear Reviewers,\\n\\nKindly ensure that you respond proactively to the authors' replies so we can foster a productive discussion. If necessary, please update your score accordingly. We greatly appreciate the time and effort you\\u2019ve dedicated to the review process, and your contributions are key to making this process run smoothly.\\n\\nThank you, \\n\\nAC\"}", "{\"title\": \"Response - 3\", \"comment\": \"#### A2: Additional results of new tasks and models\\nTo further support our findings, we conducted additional experiments on new tasks (**GPQA, IFEval, MedQA**) and a new model series (**OpenELM**). The results validate the generalizability of our conclusions, as outlined below:\\n\\n- Table 2: Results of $CoF_{\\\\text{lower}}$\\u00a0 on Additional Domain Tasks\\n\\t- The results indicate that\\u00a0 $CoF_{\\\\text{lower}}$\\u00a0 is consistently below 20% across various methods, tasks, and model combinations. 
Furthermore, we observe a decreasing trend in\\u00a0 $CoF_{\\\\text{lower}}$\\u00a0 as the ratio of model parameters decreases.\\n\\t- We also found that the collaboration rate of general models on domain tasks is slightly higher than that on general tasks.\\n\\n| Task | GPQA | | | IFEval | | | MedQA | | |\\n| ------------------ | ----- | ----- | ----- | ------ | ----- | ----- | ----- | ----- | ----- |\\n| Method / CoF_lower | SD | CD | PT | SD | CD | PT | SD | CD | PT |\\n| Qwen1.5-0.5B w/ 7B | 0.162 | 0.211 | 0.157 | 0.208 | 0.298 | 0.2 | 0.23 | 0.296 | 0.225 |\\n| Qwen1.5-1.8B w/ 7B | 0.13 | 0.198 | 0.133 | 0.174 | 0.238 | 0.164 | 0.194 | 0.314 | 0.19 |\\n| Qwen1.5-4B w/ 7B | 0.099 | 0.155 | 0.098 | 0.149 | 0.221 | 0.145 | 0.169 | 0.308 | 0.165 |\\n\\n- Table 3: Line Fitting Results of OpenELM Models\\n\\t- Given the following formula of scale ratio law $$CoF_{\\\\text{lower}}=\\\\gamma \\\\cdot {R}^{-\\\\alpha}+\\\\beta$$where $R=\\\\frac{N_l}{N_s}$, we compute the coefficients with model parameters and collaboration frequency.\\n\\t- This table presents the line fitting results for oracle decoding, including fitting error, fitting coefficients, and the final\\u00a0 x\\u00a0(i.e., $R^{-\\\\alpha}$) and\\u00a0 y\\u00a0 values on the fitting curve.\\n\\t- The results demonstrate a strong fitting effect, confirming the generalizability of our findings across different model families.\\n\\t- These results further indicate that the performance of collaborative decoding is influenced by the underlying model\\u2019s performance.\\n\\n| Task | Formula & Fitting Error | Coordinates | 270M/450M | 450M/1.1B | 1.1B/3B | 270M/1.1B | 450M/3B | 270M/3B |\\n| --------- | ------------------------------------------- | ----------- | --------- | --------- | ------- | --------- | ------- | ------- |\\n| | | Ratio | \\u22481.67 | \\u22482.44 | \\u22482.73 | \\u22484.07 | \\u22486.67 | \\u224811.11 |\\n| GSM8k | $\\\\gamma=4.66, \\\\alpha=-0.0041, \\\\beta=4.67$ | X axis | 0.9979 | 
0.9964 | 0.9959 | 0.9943 | 0.9923 | 0.9902 |\\n| | MSE Loss = 1.16e-6 | Y axis | 0.0250 | 0.0320 | 0.0320 | 0.0420 | 0.0490 | 0.0610 |\\n| MMLU-STEM | $\\\\gamma=5.02, \\\\alpha=-0.0022, \\\\beta=5.04$ | X axis | 0.9989 | 0.9981 | 0.9978 | 0.9969 | 0.9959 | 0.9948 |\\n| | MSE Loss = 2.25e-6 | Y axis | 0.0280 | 0.0300 | 0.0350 | 0.0350 | 0.0420 | 0.0490 |\\n| MBPP | $\\\\gamma=23.83, \\\\alpha=-0.0007, \\\\beta=23.84$ | X axis | 0.9996 | 0.9994 | 0.9993 | 0.9990 | 0.9987 | 0.9983 |\\n| | MSE Loss = 7.04e-5 | Y axis | 0.0220 | 0.0170 | 0.0380 | 0.0200 | 0.0480 | 0.0500 |\"}", "{\"comment\": \"The questions are clearly explained and clarified, for the 2nd question, I wanted to know if you have tried for other domains which were not included in the paper. But overall I am satisfied with the work.\"}", "{\"title\": \"Response - 4\", \"comment\": \"### Q4: Additional Illustration in Figures 11, 12, and 13\\nIn Figures 11, 12, and 13, we visualize the logits of the top 1 and top 5 tokens in the vocabulary of small models at each generation step. These logits are categorized into two distinct clusters:\\n- **Matched tokens:** Tokens where the small model\\u2019s predictions align with those of the mix-scale model.\\n- **Mismatched tokens:** Tokens where the small model\\u2019s predictions diverge from those of the mix-scale model.\\n\\nThe visualization highlights that these clusters are separable, which supports our conclusion that **dynamic routing** can be implemented. Specifically, the uncertainty of token decoding in SLMs can guide the decision of whether to engage collaboration with larger models on a token-by-token basis.\", \"we_utilize_the_following_metrics_to_evaluate_the_correlation_between_matched_and_mismatched_token_logits\": [\"**Silhouette Coefficient (SC)**\", \"This metric (range: -1 to 1) assesses clustering quality by comparing intra-cluster cohesion and inter-cluster separation. 
Values > 0.5 indicate strong clustering performance.\", \"A high SC value derived from Pearson or Spearman correlation demonstrates that the metric aligns well with the data.\", \"**Davies-Bouldin Index (DBI)**\", \"The DBI (range: $[0, \\u221e)$) measures clustering compactness and separation, where lower values (<1) suggest better clustering quality.\", \"A low DBI derived from correlation methods indicates effective uncertainty estimation.\", \"**Mean Cluster Center Distance (MCCD)**\", \"MCCD measures the separation between cluster centers, with larger values indicating better distinction. Correlation methods that amplify these distances demonstrate their alignment with the data.\", \"Table 3: Correlation Between Match/Mismatch Tokens and Top-K Token Logits of SLMs\", \"Our results demonstrate the effectiveness of uncertainty estimation:\", \"SC values are consistently close to 0.5.\", \"DBI values are below 1, indicating compact and well-separated clusters.\", \"MCCD values range between 10\\u201320, reflecting robust inter-cluster distinction.\", \"An exception is observed with Pythia series models, likely due to their insufficient pretraining.\", \"| Models | Metric | GSM8k | | MMLU | | MBPP | |\", \"| ------- | ------ | -------- | ------- | -------- | ------- | -------- | ------- |\", \"| | | 5 tokens | 1 token | 5 tokens | 1 token | 5 tokens | 1 token |\", \"| Qwen1.5 | SC | 0.465 | 0.503 | 0.445 | 0.457 | 0.47 | 0.469 |\", \"| | DBI | 0.806 | 0.805 | 0.838 | 0.917 | 0.772 | 0.909 |\", \"| | MCCD | 7.533 | 18.176 | 11.036 | 15.64 | 13.431 | 16.156 |\", \"| Pythia | SC | 0.465 | 0.358 | 0.485 | 0.286 | 0.464 | 0.315 |\", \"| | DBI | 0.79 | 1.18 | 0.755 | 1.416 | 0.779 | 1.3 |\", \"| | MCCD | 21.584 | 14.125 | 22.584 | 16.289 | 21.325 | 16.843 |\"]}", "{\"summary\": \"The paper studies collaborative decoding, where small language models and large language models work together in the decoding process. 
In particular, the paper offers a unifying perspective on 3 different collaborative decoding techniques: proxy tuning, speculative decoding and contrastive decoding. Authors categorize the larger model as System 2 and the smaller model as System 1.\\nThe paper studies the 3 techniques, their commonalities and differences through their framework FS-GEN (Fast and Slow Generating).\\nThey find that only a small fraction of decoding steps require collaboration and that System 1 and 2 follow a scaling law related to parameter ratios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Paper studies a relatively underexplored but important and emerging area of research.\\nThe findings are interesting, particularly the 2:8 law, collaborations being most necessary at the beginning of decoding and that high uncertainty tokens within System 1 are more likely to require collaboration.\\nSome of the findings could spur targeted research in the field of collaborative decoding.\\nExperimental benchmarks cover different capabilities like knowledge, math and coding, as well as two LLM families.\", \"weaknesses\": \"The System 1 and System 2 analogy is not well fleshed out, to the point where it feels more like a distraction from the main contributions.\\n\\nThe line fits on the param ratio scaling plot aren't very convincing.\\n\\nThe uncertainty analysis is only qualitative - quantitative metrics to support this hypothesis (covering different tasks and model families) are missing. Without them it's hard to have confidence in this finding.\", \"questions\": \"Related work is pushed to the Appendix. This is a strange choice. 
I understand there might have been a space crunch, but Related Work makes much more sense to be in the main paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful comments and for taking the time to review our work in detail. We would like to address potential misunderstandings in our previous response and clarify key points from the paper.\\n\\nOur motivation for **including additional experiments was to provide broader insights and explore the broader impact of our findings**. We apologize if this has added to the burden of the rebuttal process. However, we want to emphasize that these experiments are not intended to suggest a lack of detail or obvious ablations in our core contributions.\\n\\nThe central focus of our paper lies in the findings derived from **three methods applied to three datasets across two model series**, which we believe are sufficiently self-contained. **The additional experiments discussed in the rebuttal serve to extend and contextualize these findings, offering new avenues for future work.** To clarify:\\n- **Q1** provides executable insights and new results on contrastive decoding and proxy tuning. Similarly, **Q5** includes additional experiments on router optimization, extending **Finding 4** and building on the discussion in Section 5 (\\\"Cost-Aware Collaboration Optimization\\\").\\n- **Q4** relates to the source data for visualizations in Figures 11 and 12, which were excluded from the main paper due to space limitations. These do not represent new experiments but rather provide supplementary information.\\n- **Q2** offers detailed responses and further elaboration on the motivation discussed in the Introduction (Lines 84\\u201398).\\n\\n### Use of the Term \\\"Logit\\\"\\nWe acknowledge that the term \\u201clogit\\u201d may have caused some confusion. 
Our intent was to compare the top-1 token selected by the SLM and the SLM+LLM models under greedy decoding. While the final token is obtained using $\\\\text{argmax}(\\\\text{logits})$, this is effectively equivalent to using probabilities after softmax.\\nFor contrastive decoding, we primarily refer to the implementation in [1]. We note that Section 2.2 contains an incorrect citation for this reference, which we will correct. This approach uses unnormalized scores (logits) directly, as assigned by the amateur and expert models.\\n### Description of Thresholds and Accuracy\\nOur exploration of token uncertainty, as mentioned in Section 4.2.2, aligns with the discussion in Section 4 (\\\"Cost-Aware Collaboration Optimization\\\"). Here, we cite [2] to recommend dynamic collaboration (including the definition of the threshold) based on heuristic rules. The additional experiments in this area were conducted to further explore the broader implications of these findings.\\n### Reducing Emphasis on Cognitive Science Analogies\\nWhile we used the \\\"System 1 and System 2\\\" framework to illustrate fast and slow behaviors, our primary focus was on their high-efficiency collaboration, which is central to our work. This approach aligns with prior research, such as [3], but we will revise these descriptions in the paper to ensure clarity and focus.\\n\\n------\\n\\nOnce again, we sincerely thank you for your detailed and constructive feedback. While we will revise our paper to address the noted areas of potential confusion, we respectfully maintain that there are no substantial changes to our main experiments or findings.\\n\\nWe welcome further discussion on these concerns and are committed to refining our work in response to your valuable suggestions. Thank you again for your thoughtful review and insights.\\n\\n[1] O'Brien, Sean, and Mike Lewis. 
\\\"Contrastive decoding improves reasoning in large language models.\\\"\\u00a0_arXiv preprint arXiv:2309.09117_\\u00a0(2023).\\n\\n[2] Kim, Sehoon, et al. \\\"Speculative decoding with big little decoder.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a036 (2024).\\n\\n[3] Lin, Bill Yuchen, et al. \\\"Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a036 (2024).\"}", "{\"title\": \"Response - 1\", \"comment\": \"We sincerely appreciate your positive feedback and the time and effort you have dedicated to reviewing our paper. Below, we provide further illustration and additional results to address your questions.\\n### Q1: Executable Insights for Contrastive Decoding and Proxy Tuning\\n#### A1: Overview of Executable Framework\\nBuilding on the insights from our findings, we propose a direct approach to optimizing the inference cost for both **Contrastive Decoding (CD)** and **Proxy Tuning (PT)**. Previous work on CD and PT typically involves collaboration across **all tokens** during text generation. However, our results suggest that this is unnecessary, as efficient collaboration can be achieved by focusing only on **specific tokens**.\", \"in_this_optimized_framework\": [\"**Small models serve as the main backbone** in CD and PT. They are tasked with generating the majority of the content during text generation.\", \"**Token-level collaboration is determined dynamically** based on the logits distribution. Specifically, we identify whether a token requires collaboration from large models by analyzing the features of the match and mismatch logits between small models and mixed-scale models.\", \"To implement this, we can **train a lightweight token-level router** that leverages these logits features. 
The router determines when collaboration with a larger model is necessary, effectively balancing performance and efficiency.\"]}", "{\"metareview\": \"The paper examines collaborative decoding, where small and large language models work together during the decoding process. It unifies three techniques: proxy tuning, speculative decoding, and contrastive decoding, framing them through the FS-GEN (Fast and Slow Generating) framework. The larger model is characterized as System 2, which operates slowly and deliberately, while the smaller model is System 1, functioning quickly and intuitively. The study finds that only a small fraction of decoding steps require collaboration and identifies a scaling law related to parameter ratios. Using the Qwen and Pythia series, evaluated across datasets like MMLU-STEM, GSM8k, and MBPP, the research highlights that collaborative interactions are most critical at the start of the generation process, with an optimal interaction frequency around an 80-20 ratio, depending on the task.\\n\\nMy decision is to reject the paper, as it requires substantial revisions, particularly in areas highlighted by Reviewer VTMo. Additionally, the authors need to clarify the terminology used throughout the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer gboG identified a significant oversight in the paper regarding the use of \\\"logits,\\\" which should correctly refer to unnormalized scores. The proper term, as used in the contrastive decoding paper, is \\\"log-probability,\\\" leading to gboG lowering their score. Reviewer DHMi also reduced their score from 8 to 6, although this score is notably an outlier compared to other review scores. Reviewer VTMo stated that the paper needs substantial revisions to be both understandable and assessable in terms of its merits. The paper lacks detailed experimental information, and its conclusions are inadequately explained. 
Additionally, the authors have not addressed these issues.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you very much for your explanations. I stand by my earlier comment that the questions asked in your article are interesting and that there are likely many valuable results in your paper. I appreciate that you, among other things, acknowledge in your response that it is necessary to strengthen the connections between the figures and the corresponding conclusions (and give some of those explanations in your rebuttal), that you agree that experimental details are needed for readers to validate your experiments, and that the paper could benefit from some more explicit reasoning on how the conclusions were arrived at. I believe that making all these changes would drastically improve your paper (though I would have to take another few hours to re-review to make sure that the conclusions make sense given the added experimental details, descriptions, motivation and reasoning). I also think that these changes would be quite substantial (as also confirmed by the length of your response) and -- as I said in the line before -- reviewing them would take almost as much time as reviewing the paper in the first place. For me, this is beyond the expectations of a rebuttal phase and I will thus stick with my recommendation to reject your work. Of course, it is just one man's opinion, and I also want to acknowledge that I do think your paper has promise and carries some interesting ideas. The other reviewers appear to be more positive about this work than myself, so perhaps the AC will just overrule my specific opinion :). I hope in any case that my comments were useful for improving your work.\"}", "{\"title\": \"Response - 2\", \"comment\": \"#### A2: Preliminary Executable Results\\nWe present preliminary results for CD and PT, demonstrating their potential to optimize speed-performance trade-offs. 
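In outline, the token-level routing evaluated in these experiments can be sketched as below. This is an illustrative reconstruction rather than the exact implementation: `collab_next_token` is a hypothetical stand-in for whichever collaborative step (CD or PT with the large model) is being rationed, and the threshold is compared against the small model's top-1 probability.

```python
import numpy as np

def top1_confidence(logits):
    """Top-1 probability of the small model's next-token distribution."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                       # stabilize before exponentiating
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs.max())

def route_step(slm_logits, collab_next_token, threshold=0.5):
    """Decode with the SLM alone when it is confident; otherwise defer
    this step to the collaborative (mixed-scale) decision."""
    if top1_confidence(slm_logits) >= threshold:
        return int(np.argmax(slm_logits)), False   # SLM-only step
    return collab_next_token(), True               # routed step
```

Under this sketch, a very low threshold reproduces the 0.0% routing ratio (SLM-only decoding) reported in the tables, while a threshold of 1.0 routes every step and recovers full collaboration.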
Specifically, instead of conducting CD and PT on all tokens during text generation by small models, we focus these collaborations on a subset of mismatch tokens. By collaborating on these uncertain tokens alone, we achieve performance comparable to previous approaches that rely on collaborations for all tokens.\\n\\n- Table 1: Routing with Top-1 Token Logits of SLM for Contrastive Decoding\\n\\t- At each decoding step of the SLM, we determine whether to involve LLM collaboration based on the top-1 token logits of the SLM. A routing ratio of 0.0% implies decoding exclusively with the SLM, while a ratio of 100% indicates collaboration between the SLM and LLM at all steps.\\n\\t- Our results show that we can significantly reduce inference cost while maintaining comparable performance. Interestingly, when the performance gap between the SLM and LLM is small (e.g., 4B vs. 7B models), the performance improvement becomes less pronounced.\\n\\n| Contrastive Dec | Threshold | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.8 | 1.0 |\\n| ------------------ | --------- | ----- | ----- | ----- | ----- | ----- | ------ | ------ | ------ | ---- |\\n| Qwen1.5-0.5B w/ 7B | Ratio | 0.0% | 0.0% | 0.3 % | 2.3 % | 5.6 % | 11.1 % | 17.2 % | 29.9 % | 100% |\\n| | Accuracy | 17.0 | 17.0 | 17.0 | 18.0 | 25.4 | 31.0 | 34.8 | 48.2 | 54.4 |\\n| Qwen1.5-1.8B w/ 7B | Ratio | 0.0 | 0.0 | 0.2 % | 1.4 % | 4.1 % | 8.4 % | 13.9 % | 25.4 % | 100% |\\n| | Accuracy | 36.2% | 36.2% | 38.8 | 38.2 | 37.2 | 41.0 | 43.4 | 49.4 | 53.2 |\\n| Qwen1.5-4B w/ 7B | Ratio | 0.0 | 0.0 | 0.2 % | 1.3 % | 3.8 % | 8 % | 13.2 % | 24.6 % | 100% |\\n| | Accuracy | 52.2 | 52.2 | 52.6 | 51.8 | 53.2 | 51.0 | 51.4 | 51.0 | 51.0 |\\n- Table 2: Routing with Top-1 Token Logits of SLM for Proxy Tuning\\n\\t- The results for Proxy Tuning exhibit a similar trend to Contrastive Decoding. 
Here, a routing ratio of 0.0% denotes generation exclusively using small tuned models, while a ratio of 100% indicates generation involving both small tuned models and large base models.\\n\\n| Proxy Tuning | Threshold | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.8 | 1.0 |\\n| :----------------- | :-------- | ---- | ---- | ----- | ----- | ----- | ------ | ------ | ------ | ---- |\\n| Qwen1.5-0.5B w/ 7B | Ratio | 0.0% | 0.0% | 0.3 % | 1.9 % | 5.4 % | 10.3 % | 16.4 % | 28.9 % | 100% |\\n| | Accuracy | 9.6 | 9.8 | 11.0 | 11.8 | 17.8 | 20.2 | 25.6 | 38.4 | 57.0 |\\n| Qwen1.5-1.8B w/ 7B | Ratio | 0.0% | 0.0% | 0.1 % | 1 % | 2.9 % | 6.4 % | 11.3 % | 21.9 % | 100% |\\n| | Accuracy | 33.0 | 33.0 | 33.0 | 35.4 | 39.0 | 37.8 | 44.0 | 50.2 | 57.0 |\\n| Qwen1.5-4B w/ 7B | Ratio | 0.0% | 0.0% | 0.1 % | 1 % | 3.1 % | 6.5 % | 11.5 % | 21.7 % | 100% |\\n| | Accuracy | 45.6 | 45.6 | 45.0 | 46.6 | 48.8 | 51.0 | 52.0 | 53.4 | 56.8 |\\n\\nAdditionally, these findings align with recent advancements in test-time compute scaling applications. The token-level uncertainty analysis in our work can also be applied to entropy-based decoding methods like Entropix, where high-entropy tokens can be handled similarly to uncertain tokens in small language models.\\n\\nIn our experiments, we used a simple routing mechanism based on the top-1 token logits threshold. However, this can be extended by training a more sophisticated router with richer features during decoding, which has the potential to further improve the efficiency and effectiveness of collaborative decoding.\"}", "{\"title\": \"Response - 4\", \"comment\": \"### Q4: Motivation and Analogy of System 1 and 2 with Collaborative Decoding\\n\\nIn this work, we draw inspiration from the analogy of System 1 and System 2, simplifying their collaboration into **Fast and Slow thinking**. System 1 efficiently handles approximately 95% of routine tasks, while System 2 is reserved for deliberately addressing the remaining 5% of complex work [1]. 
Together, they demonstrate the power of **high-efficiency collaboration**.\\n\\nWe adopt this high-efficiency motivation to model the collaborative decoding methods between **fast and slow** (or **small and large**) models. Our experimental findings (Findings 1 and 2) show that small, fast models generate approximately 80% of tokens during the answering process, while large, slow models contribute the remaining 20%.\\n\\nLooking forward, we aim to expand these collaborative mechanisms to **reasoner and knowledge models**, such as OpenAI\\u2019s o1 and GPT-4, which not only embody the fast/slow model paradigm but also represent intuitive and deliberate thinking. Preliminary experiments reinforce our findings, indicating successful collaboration between o1 and existing large language models.\\n\\n[1] Booch, Grady, et al. \\\"Thinking fast and slow in AI.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021.\\n\\n### Q5: Position of Related Works in the Paper\\n\\nWe appreciate your feedback regarding the placement of related works. To improve clarity, we will reorganize the structure of the paper. Specifically:\\n- **Revised structure:** We will simplify the presentation of the main text to enhance readability.\\n- **Related works:** We will introduce related work briefly in the main text, ensuring it is more integrated and accessible.\\n\\nWe hope these responses address your concerns and provide further clarity. 
Thank you once again for your constructive feedback and valuable suggestions, which have been instrumental in improving our work.\"}", "{\"title\": \"Response - 2\", \"comment\": \"### Q2: Experiments on Domain-Specific Datasets\\nTo validate the robustness of our findings, we conducted additional experiments on **GPQA**, **MedQA**, and **IFEval**, which encompass biology, medical, and physics question-answering tasks, as well as open-domain instruction-following tasks.\\n\\n- Table 3: Results of $CoF_{\\\\text{lower}}$ on Additional Domain Tasks\\n\\t- The results indicate that $CoF_{\\\\text{lower}}$ remains low, mostly at or below 20%, across various methods, tasks, and model combinations. Furthermore, we observe a decreasing trend in $CoF_{\\\\text{lower}}$ as the ratio of model parameters decreases.\\n\\t- We also found that the collaboration rate of general models on domain tasks is slightly higher than that on general tasks.\\n\\n| Task | GPQA | | | IFEval | | | MedQA | | |\\n| ------------------ | ----- | ----- | ----- | ------ | ----- | ----- | ----- | ----- | ----- |\\n| Method / CoF_lower | SD | CD | PT | SD | CD | PT | SD | CD | PT |\\n| Qwen1.5-0.5B w/ 7B | 0.162 | 0.211 | 0.157 | 0.208 | 0.298 | 0.2 | 0.23 | 0.296 | 0.225 |\\n| Qwen1.5-1.8B w/ 7B | 0.13 | 0.198 | 0.133 | 0.174 | 0.238 | 0.164 | 0.194 | 0.314 | 0.19 |\\n| Qwen1.5-4B w/ 7B | 0.099 | 0.155 | 0.098 | 0.149 | 0.221 | 0.145 | 0.169 | 0.308 | 0.165 |\\n\\nWhen extending model collaborations from generalist to specialist tasks, we anticipate that the collaboration frequency will decrease due to the narrower distribution of domain-specific terminology. 
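For concreteness, CoF_lower can be read schematically as the fraction of greedy decoding steps at which the small model's token disagrees with the collaborative (mixed-scale) output for the same prefix. The sketch below assumes that reading; it is not the paper's exact procedure.

```python
import numpy as np

def cof_lower(slm_tokens, collab_tokens):
    """Schematic collaboration frequency: fraction of greedy decoding steps
    where the small model's token differs from the collaborative output."""
    slm = np.asarray(slm_tokens)
    collab = np.asarray(collab_tokens)
    assert slm.shape == collab.shape, "one token id per decoding step"
    return float((slm != collab).mean())

# two mismatches (positions 2 and 7) out of ten steps -> CoF = 0.2
example = cof_lower([5, 3, 9, 1, 4, 7, 2, 6, 8, 0],
                    [5, 3, 0, 1, 4, 7, 2, 0, 8, 0])
```

Under this reading, each Table 3 entry corresponds directly to the share of decoding steps on which the large model is consulted.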
However, the lack of a comprehensive range of specialized model series limits further analysis at this stage, and we leave this exploration as future work.\\n\\n### Q3: Different Sampling Techniques\\nIn our current work, we use **greedy decoding** to compute the matching rate of tokens between small and large language models. This choice aligns with our initial motivation of achieving collaborative decoding with minimal intervention in small models, treating the collaborative decoding results as golden tokens.\\n\\nFor scenarios where exact matching is less critical and the focus shifts to performance-speed optimization, other sampling techniques can be explored. These techniques might yield better performance with reduced collaboration frequency, leading to more efficient collaborations. However, quantifying results becomes more challenging due to the increased uncertainty introduced by sampling.\\nWe believe this is an exciting direction for future research, as it opens up possibilities for balancing efficiency and performance through alternative decoding strategies.\"}", "{\"title\": \"Response - 2\", \"comment\": [\"### Q3: Quantitative Metrics for Uncertainty Analysis\", \"To strengthen the evidence supporting our uncertainty analysis, we provide additional quantitative results, generalizing across all model combinations and methods. We utilize the following metrics to evaluate the correlation between matched and mismatched token logits:\", \"**Silhouette Coefficient (SC)**\", \"This metric (range: -1 to 1) assesses clustering quality by comparing intra-cluster cohesion and inter-cluster separation. 
Values > 0.5 indicate strong clustering performance.\", \"A high SC value derived from Pearson or Spearman correlation demonstrates that the metric aligns well with the data.\", \"**Davies-Bouldin Index (DBI)**\", \"The DBI (range: $[0, \\u221e)$) measures clustering compactness and separation, where lower values (<1) suggest better clustering quality.\", \"A low DBI derived from correlation methods indicates effective uncertainty estimation.\", \"**Mean Cluster Center Distance (MCCD)**\", \"MCCD measures the separation between cluster centers, with larger values indicating better distinction. Correlation methods that amplify these distances demonstrate their alignment with the data.\", \"Table 2: Correlation Between Match/Mismatch Tokens and Top-K Token Logits of SLMs\", \"Our results demonstrate the effectiveness of uncertainty estimation:\", \"SC values are consistently close to 0.5.\", \"DBI values are below 1, indicating compact and well-separated clusters.\", \"MCCD values range between 10\\u201320, reflecting robust inter-cluster distinction.\", \"An exception is observed with Pythia series models, likely due to their insufficient pretraining.\", \"| Models | Metric | GSM8k | | MMLU | | MBPP | |\", \"| ------- | ------ | -------- | ------- | -------- | ------- | -------- | ------- |\", \"| | | 5 tokens | 1 token | 5 tokens | 1 token | 5 tokens | 1 token |\", \"| Qwen1.5 | SC | 0.465 | 0.503 | 0.445 | 0.457 | 0.47 | 0.469 |\", \"| | DBI | 0.806 | 0.805 | 0.838 | 0.917 | 0.772 | 0.909 |\", \"| | MCCD | 7.533 | 18.176 | 11.036 | 15.64 | 13.431 | 16.156 |\", \"| Pythia | SC | 0.465 | 0.358 | 0.485 | 0.286 | 0.464 | 0.315 |\", \"| | DBI | 0.79 | 1.18 | 0.755 | 1.416 | 0.779 | 1.3 |\", \"| | MCCD | 21.584 | 14.125 | 22.584 | 16.289 | 21.325 | 16.843 |\", \"### Q4: Position of Related Works in the Paper\", \"We appreciate your feedback regarding the placement of related works. To improve clarity, we will reorganize the structure of the paper. 
Specifically:\", \"**Revised structure:** We will simplify the presentation of the main text to enhance readability.\", \"**Related works:** We will introduce related work briefly in the main text, ensuring it is more integrated and accessible.\", \"We hope these responses address your concerns and provide further clarity. Thank you once again for your constructive feedback and valuable suggestions, which have been instrumental in improving our work.\"]}", "{\"summary\": \"This paper presents an investigation into collaborative decoding between small and large language models, attempting to formalize it from the perspective of a system 1 / system 2 collaboration, where system 1 operates quickly and intuitively, while system 2 functions in a slower and more deliberate manner. 
The paper focuses on the differences between system 1 and 2 in the context of decoding: when system 1 would underperform compared to system 2, and how the efficiency of the compound system can be improved. For their investigation, the authors use the Qwen and Pythia series. To evaluate the system, they consider MMLU-STEM, GSM8k and MBPP. The analysis focuses on two aspects of collaboration: frequency and position, where the former refers to how often the models should interact, whereas the second one refers to the specific points of interaction. They find that collaborative interactions are most critical at the beginning of the generation, and that the optimal frequency is around 80-20, depending on the task.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper asks an interesting question and presents several findings. The idea to take inspiration from system 1 and system 2 is interesting.\", \"weaknesses\": \"My main qualm with the work is the presentation of the paper, which almost reads like a slide deck: plenty of conclusions and graphics, but little to no details about how the experiments are actually set up or how the conclusions are drawn. I also don't see any evidence of how well the collaborative decoding actually works (that is, there are no accuracy scores reported), and how that may depend on the frequency or place of collaboration. The many figures are hardly described. There is also no discussion of how the results are different between the benchmarks and whether that may make sense given the topics.\\n\\nLastly, while I like the idea of interpreting collaborative decoding as a system-1 system-2 scenario, the current work does not really convince me that it makes sense to explore collaborative decoding with SLMs and LLMs in this way. 
Wouldn't LLMs be better both at the intuition and the deliberate reasoning?\\n\\nIn sum, it could be that the paper contains many interesting results, but if so, the current presentation does not do them justice.\", \"nb\": \"the related work section is in the appendix and is not even referred to\", \"questions\": \"Could you elaborate on the motivation of using system 1 - system 2 reasoning for collaborative decoding with SLMs and LLMs, specifically?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Dear Reviewers,\", \"Thank you for your comments on our paper. We have carefully revised the manuscript to address your concerns, incorporating additional explanations, analyses, and clarifications as needed. For your convenience, all new content in the revised manuscript is highlighted in blue. Below, we summarize the changes made in response to your comments:\", \"### Descriptive Questions\", \"**Update to the analogy of System 1 and System 2 (@Reviewers zvri, gboG, VTMo):** We have reduced references to human cognition and instead emphasized the efficient collaboration between fast and slow systems. 
These updates are reflected in Figure 1 and the Introduction (Section 1).\", \"**Discussion on different sampling techniques (@Reviewers DHMi, gboG):** A new discussion of additional sampling techniques is provided in Appendix D.\", \"**Explanation of features in Figures 11, 12, and 13 (@Reviewer gboG):** We have added further analysis of these features in Section 6 (Discussion).\", \"**Simplified related works section (@Reviewers zvri, VTMo):** A concise version of the related works previously in the appendix has been rewritten and moved to Section 2 (Related Works).\", \"**Discussion of differences in datasets and models (@Reviewer VTMo):** We now discuss the impact of dataset size and model performance in Section 6 (Discussion).\", \"**Additional running example for experimental settings and reproducibility (@Reviewer VTMo):** A detailed example, focusing on computing collaboration frequency, is included in Appendix C (Table 2). The implementation code is provided in an anonymous repository.\", \"**Explanation of outputs and logits (@Reviewer gboG):** We have corrected typos, resolved citation errors, and provided further explanations regarding logits in Appendix D.\", \"### Experimental Questions\", \"**Discussion on domain-specific datasets (@Reviewer DHMi):** We have added results on collaboration frequency for MedQA, GPQA, and IFEval datasets in Appendix E1 (Table 3).\", \"**Discussion on practical and executable applications (@Reviewers DHMi, gboG):** We provided additional results on token-based routing using SLM logits to improve quality-efficiency trade-offs. 
These updates are included in Section 6 (Discussion, Figure 11) and Appendix F.2 (Figures 17, 18).\", \"**Further analysis of parameter ratio scaling effects (@Reviewer zvri):** We analyzed cases of poor fitting and updated more results for OpenELMs in Section 5.1.2 (Figure 5) and Appendix E1.\", \"**Quantitative metrics for uncertainty analysis (@Reviewer zvri):** We included corresponding quantitative metrics for Figures 10 and 11 in Table 4 and provided a detailed explanation of the correlations in Appendix F.1.\", \"----\", \"Our core findings remain unchanged, but we have clarified key points, validated our results on domain-specific datasets, and supplemented our discussion on the practical application of our empirical results. We hope these updates sufficiently address your concerns. Thank you for your time and continued consideration.\"]}", "{\"title\": \"Response - 6\", \"comment\": \"#### A2: Additional Results on Different Metrics\\nTo further evaluate the effectiveness of various metrics for routing from SLMs to mix-scaled models, we conducted additional analysis. Building on the executable insights provided in Q1, which demonstrated the effectiveness of routing and quantitative uncertainty scores using clustering metrics, we extended our investigation to entropy and perplexity metrics.\\n\\nWe analyzed the correlation between token matching/mismatching and entropy/perplexity scores, expanding on Findings 4 presented in Figures 11, 12, and 13.\\n\\n- Table 4: Correlation Between Match/Mismatch Tokens and Entropy/Perplexity Scores of SLMs (Qwen series)\\n\\t- The results presented in Table 4 reveal trends similar to those observed in the previous analysis in **Q3**. 
Furthermore, entropy and perplexity metrics perform better on recognizing mismatched tokens.\\n\\t- This consistency underscores the effectiveness of entropy and perplexity metrics, demonstrating that they serve a similar role to the top- k\\u00a0 token logits in identifying match and mismatch tokens.\\n\\t- These findings align closely with our analysis in **A1**, further validating the utility of entropy and perplexity as reliable metrics for guiding collaborative decoding decisions.\\n|Task|GPQA|||IFEval|||MedQA|||\\n|---|---|---|---|---|---|---|---|---|---|\\n|Feature/Metric|SC|DBI|MCCD|SC|DBI|MCCD|SC|DBI|MCCD|\\n|Top logits of 1 token|0.572|0.668|13.182|0.317|1.22|117.359|0.308|1.261|123.797|\\n|Top logits of 5 tokens|0.45|0.896|5.86|0.26|1.473|107.906|0.216|1.677|105.803|\\n|Token Entropy|0.742|0.456|2.667|0.624|0.536|3.662|0.632|0.563|2.445|\\n|Token PPL|0.838|0.504|3.261|0.767|0.53|5.768|0.775|0.553|2.986|\\n|Context PPL|0.934|0.325|2.276|0.588|0.61|0.603|0.569|0.597|0.264|\\n\\nOur results confirm the feasibility of implementing routing using entropy and perplexity scores, aligning with recent developments in entropy-based decoding projects, such as **Entropix** [1]. We believe our findings provide valuable insights and can further advance research in this area.\\n\\n[1] https://github.com/xjdr-alt/entropix\\n\\n-----\\n\\nThank you again for your feedback and queries. We welcome any further discussion to address potential misunderstandings or to clarify our results.\"}", "{\"title\": \"Response - 1\", \"comment\": \"We sincerely appreciate your positive feedback, along with the time and effort you have put into reviewing our paper.\\nWe would like to give our thoughts and new results to address your concerns.\\n\\n### Q1: Relationship Between Main Contributions and the System 1 & System 2 Analogy\\nIn this work, we draw inspiration from the analogy of System 1 and System 2, simplifying their collaboration into **Fast and Slow thinking**. 
System 1 efficiently handles approximately 95% of routine tasks, while System 2 is reserved for deliberately addressing the remaining 5% of complex work [1]. Together, they demonstrate the power of **high-efficiency collaboration**.\\n\\nWe adopt this high-efficiency motivation to model the collaborative decoding methods between **fast and slow** (or **small and large**) models. Our experimental findings (Findings 1 and 2) show that small, fast models generate approximately 80% of tokens during the answering process, while large, slow models contribute the remaining 20%.\\n\\nLooking forward, we aim to expand these collaborative mechanisms to **reasoner and knowledge models**, such as OpenAI\\u2019s o1 and GPT-4, which not only embody the fast/slow model paradigm but also represent intuitive and deliberate thinking. Preliminary experiments reinforce our findings, indicating successful collaboration between o1 and existing large language models.\\n\\n[1] Booch, Grady, et al. \\\"Thinking fast and slow in AI.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021.\\n\\n### Q2: Line Fitting on the Parameter Ratio Scaling\\nThe line fitting in Figures 4 and 5 is influenced by both data size and model performance:\", \"- **Data Size:** Due to computational limitations, we sampled only ~500 data points for each task. This sampling constraint may contribute to fluctuations in the observed curve.\\n- **Model Performance:** Parameter ratio scaling laws are significantly affected by model performance. 
While Qwen series models maintain consistent performance, Pythia models underperform due to insufficient pretraining, thereby affecting the collaboration dynamics between large and small models.\\n\\nTo further validate our findings, we conducted additional experiments on **OpenELM models** [2], which exhibit better performance compared to Pythia.\\n\\n- Table 1: Line Fitting Results of OpenELM Models\\n\\t- Given the following formula of scale ratio law $$CoF_{\\\\text{lower}}=\\\\gamma \\\\cdot {R}^{-\\\\alpha}+\\\\beta$$where $R=\\\\frac{N_l}{N_s}$, we compute the coefficients with model parameters and collaboration frequency.\\n\\t- This table presents the line fitting results for oracle decoding, including fitting error, fitting coefficients, and the final\\u00a0x\\u00a0(i.e., $R^{-\\\\alpha}$) \\u00a0and\\u00a0 y\\u00a0 values on the fitting curve.\\n\\t- The results demonstrate a strong fitting effect, confirming the generalizability of our findings across different model families.\\n\\t- These results further indicate that the performance of collaborative decoding is influenced by the underlying model\\u2019s performance.\\n\\n| Task | Formula & Fitting Error | Coordinates | 270M/450M | 450M/1.1B | 1.1B/3B | 270M/1.1B | 450M/3B | 270M/3B |\\n| --------- | ------------------------------------------- | ----------- | --------- | --------- | ------- | --------- | ------- | ------- |\\n| | | Ratio | \\u22481.67 | \\u22482.44 | \\u22482.73 | \\u22484.07 | \\u22486.67 | \\u224811.11 |\\n| GSM8k | $\\\\gamma=4.66, \\\\alpha=-0.0041, \\\\beta=4.67$ | X axis | 0.9979 | 0.9964 | 0.9959 | 0.9943 | 0.9923 | 0.9902 |\\n| | MSE Loss = 1.16e-6 | Y axis | 0.0250 | 0.0320 | 0.0320 | 0.0420 | 0.0490 | 0.0610 |\\n| MMLU-STEM | $\\\\gamma=5.02, \\\\alpha=-0.0022, \\\\beta=5.04$ | X axis | 0.9989 | 0.9981 | 0.9978 | 0.9969 | 0.9959 | 0.9948 |\\n| | MSE Loss = 2.25e-6 | Y axis | 0.0280 | 0.0300 | 0.0350 | 0.0350 | 0.0420 | 0.0490 |\\n| MBPP | $\\\\gamma=23.83, \\\\alpha=-0.0007, 
\\\\beta=23.84$ | X axis | 0.9996 | 0.9994 | 0.9993 | 0.9990 | 0.9987 | 0.9983 |\\n| | MSE Loss = 7.04e-5 | Y axis | 0.0220 | 0.0170 | 0.0380 | 0.0200 | 0.0480 | 0.0500 |\\n\\n[2] Mehta, Sachin, et al. \\\"Openelm: An efficient language model family with open-source training and inference framework.\\\"\\u00a0_arXiv e-prints_\\u00a0(2024): arXiv-2404.\"}", "{\"title\": \"Clarifying Misunderstandings\", \"comment\": [\"Thank you for your response. We noticed there are still significant misunderstandings regarding both our response and the paper. We\\u2019d like to clarify that **our detailed response and additional results do not introduce \\u201csubstantial changes\\u201d beyond what was presented in the original submission**. Below, we address each point to provide further explanation:\", \"**Clarifying System 1 & 2 Analogy and Collaborative Decoding Setup:**\", \"A significant portion of our response is dedicated to helping the reviewer better understand the analogy of System 1 and System 2 **(Q4)** and the working mechanism of collaborative decoding **(Q2-A1)**. These aspects are already highlighted in our paper. Specifically:\", \"The motivation for high-efficiency collaboration is discussed in the Introduction (**Lines 084-098**).\", \"The operational details and experimental setups can be inferred from the related works (e.g., speculative decoding[1], contrastive decoding[2], and proxy tuning[3]) and our explanations in **Sections 3.1, 3.2, and Appendix C**.\", \"These ensure that our results are reproducible based on the information provided. Additionally, we open-source our code within a unified framework in [Anonymous Repository](https://anonymous.4open.science/r/ICLR2025_Anonymous-127D) for reference.\", \"**Additional Results and Their Relevance:**\", \"The remaining part of our response provides additional results **(Q2-A2, Q3-A2)** to demonstrate the generalizability of our findings across more tasks and models. 
As per the [ICLR Reviewer Guide](https://iclr.cc/Conferences/2025/ReviewerGuide), these supplementary experiments do not alter the conclusions of our submission but instead validate the existing results more thoroughly.\", \"**Clarifying Our Focus:**\", \"Once the setup is clear, it becomes evident that our study focuses on identifying common features of various collaborative decoding methods. Previous works [1,2,3] have already demonstrated the performance of collaborative decoding.\", \"In our study, the outputs of collaborative decoding are considered as ground truth (or \\u201cgolden\\u201d outputs). **Therefore, accuracy results for experiments are not the primary goal.** Instead, we aim to explore the minimal frequency and key positions of collaboration, particularly from the perspective of smaller models.\", \"Based on the visualization, we can clearly derive the common findings across various benchmarks and models, also as highlighted by Reviewer DHMi. While there may be subtle differences in the results due to variations in tasks and the capabilities of different models, these do not affect our primary findings **(Q3-A1)**. We will include a separate section to discuss these nuances.\", \"In conclusion, we believe **our submission provides sufficient experimental details for reproducibility, and our response does not introduce significant changes beyond the original paper.**\", \"We are happy to engage in further discussions to address any remaining points of confusion.\", \"[1] Leviathan, Yaniv, Matan Kalman, and Yossi Matias. \\\"Fast inference from transformers via speculative decoding.\\\"\\u00a0_International Conference on Machine Learning_. PMLR, 2023.\", \"[2] Li, Xiang Lisa, et al. \\\"Contrastive decoding: Open-ended text generation as optimization.\\\" arXiv preprint arXiv:2210.15097 (2022).\", \"[3] Liu, Alisa, et al. 
\\\"Tuning language models by proxy.\\\"\\u00a0_arXiv preprint arXiv:2401.08565_\\u00a0(2024).\"]}", "{\"title\": \"Response - 1\", \"comment\": \"We sincerely thank you for your positive feedback and valuable suggestions. Below, we provide our thoughts and new results addressing your questions.\\n### Q1: Discussion of Practical Applications\\nThe primary motivation behind collaborative decoding between large and small language models is to optimize the speed-performance trade-off. Previous works, such as speculative decoding, have demonstrated the effectiveness of reducing inference time. Our work generalizes this collaboration to broader methods, including **Contrastive Decoding (CD)** and **Proxy Tuning (PT)**.\\n\\nHere, we present some preliminary results for CD and PT, demonstrating their potential to optimize speed-performance trade-offs. Specifically, instead of conducting CD and PT on all tokens during text generation by small models, we focus these collaborations on a subset of mismatch tokens compared to mix-scaled models. By collaborating on these uncertain tokens alone, we achieve performance comparable to previous approaches that rely on collaborations for all tokens.\\n\\n- Table 1: Routing with Top-1 Token Logits of SLM for Contrastive Decoding\\n\\t- At each decoding step of the SLM, we determine whether to involve LLM collaboration based on the top-1 token logits of the SLM. A routing ratio of 0.0% implies decoding exclusively with the SLM, while a ratio of 100% indicates collaboration between the SLM and LLM at all steps.\\n\\t- Our results show that we can significantly reduce inference cost while maintaining comparable performance. Interestingly, when the performance gap between the SLM and LLM is small (e.g., 4B vs. 
7B models), the performance improvement becomes less pronounced.\\n\\n| Contrastive Decoding | Threshold | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.8 | 1.0 |\\n| -------------------- | --------- | ----- | ----- | ----- | ----- | ----- | ------ | ------ | ------ | ---- |\\n| Qwen1.5-0.5B w/ 7B | Ratio | 0.0% | 0.0% | 0.3 % | 2.3 % | 5.6 % | 11.1 % | 17.2 % | 29.9 % | 100% |\\n| | Accuracy | 17.0 | 17.0 | 17.0 | 18.0 | 25.4 | 31.0 | 34.8 | 48.2 | 54.4 |\\n| Qwen1.5-1.8B w/ 7B | Ratio | 0.0% | 0.0% | 0.2 % | 1.4 % | 4.1 % | 8.4 % | 13.9 % | 25.4 % | 100% |\\n| | Accuracy | 36.2 | 36.2 | 38.8 | 38.2 | 37.2 | 41.0 | 43.4 | 49.4 | 53.2 |\\n| Qwen1.5-4B w/ 7B | Ratio | 0.0% | 0.0% | 0.2 % | 1.3 % | 3.8 % | 8.0 % | 13.2 % | 24.6 % | 100% |\\n| | Accuracy | 52.2 | 52.2 | 52.6 | 51.8 | 53.2 | 51.0 | 51.4 | 51.0 | 51.0 |\\n- Table 2: Routing with Top-1 Token Logits of SLM for Proxy Tuning\\n\\t- The results for Proxy Tuning exhibit a similar trend to Contrastive Decoding. Here, a routing ratio of 0.0% denotes generation exclusively using small tuned models, while a ratio of 100% indicates generation involving both small tuned models and large base models.\\n\\n| Proxy Tuning | Threshold | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.8 | 1.0 |\\n| :----------------- | :-------- | ---- | ---- | ----- | ----- | ----- | ------ | ------ | ------ | ---- |\\n| Qwen1.5-0.5B w/ 7B | Ratio | 0.0% | 0.0% | 0.3 % | 1.9 % | 5.4 % | 10.3 % | 16.4 % | 28.9 % | 100% |\\n| | Accuracy | 9.6 | 9.8 | 11.0 | 11.8 | 17.8 | 20.2 | 25.6 | 38.4 | 57.0 |\\n| Qwen1.5-1.8B w/ 7B | Ratio | 0.0% | 0.0% | 0.1 % | 1.0 % | 2.9 % | 6.4 % | 11.3 % | 21.9 % | 100% |\\n| | Accuracy | 33.0 | 33.0 | 33.0 | 35.4 | 39.0 | 37.8 | 44.0 | 50.2 | 57.0 |\\n| Qwen1.5-4B w/ 7B | Ratio | 0.0% | 0.0% | 0.1 % | 1.0 % | 3.1 % | 6.5 % | 11.5 % | 21.7 % | 100% |\\n| | Accuracy | 45.6 | 45.6 | 45.0 | 46.6 | 48.8 | 51.0 | 52.0 | 53.4 | 56.8 |\\n\\nAdditionally, these findings align with recent advancements in test-time compute scaling
applications. The token-level uncertainty analysis in our work can also be applied to entropy-based decoding methods like Entropix [1], where high-entropy tokens can be handled similarly to uncertain tokens in small language models.\\nIn the above experiments, we used a simple routing mechanism based on the top-1 token logits threshold. However, this can be extended by training a more sophisticated router with richer features during decoding, which has the potential to further improve the efficiency and effectiveness of collaborative decoding.\\n\\n[1] https://github.com/xjdr-alt/entropix\"}" ] }
4ZhUKd05QM
LGDiffGait: Local and Global Difference Learning for Gait Recognition with Silhouettes
[ "Qian Zhou", "Zhongyuan Wang", "Hua Zou", "Gang Wu", "Feng Tian" ]
The subtle differences between consecutive frames of a gait video sequence are crucial for accurate gait identification, as they reflect the distinctive movement of various body parts during an individual’s walk. However, most existing methods often focus on capturing spatial-temporal features of entire gait sequences only, which results in the neglect of these nuances. To address the limitation, in this paper, we propose a new approach, named Local and Global Difference Learning for Gait Recognition with Silhouettes (LGDiffGait). Specifically, the differences within gait sequences are explicitly modeled at two levels: local window-level and global sequence-level. For the local window-level, we apply sliding windows along the temporal dimension to aggregate the window-level information, and the local movement is defined as the difference between pooled features of adjacent frames within each window. For the global sequence-level, global pooling across the entire sequence is employed, which is followed by subtraction to capture overall movement differences. Moreover, after difference feature learning, we develop a temporal alignment module to align these extracted local and global differences with the overall sequence dynamics, ensuring temporal consistency. By explicitly modeling these differences, LGDiffGait can capture the subtle movements of different body parts, enabling the extraction of more discriminative features. Our experimental results demonstrate that LGDiffGait achieves state-of-the-art performance on four publicly available datasets.
[ "Gait Recognition; Movement Difference Modeling; Temporal Modeling" ]
Reject
https://openreview.net/pdf?id=4ZhUKd05QM
https://openreview.net/forum?id=4ZhUKd05QM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxWV1BLpCD", "tTXjoPdyoU", "gMkysBgfNu", "aJYWWKkxv9", "UZemzs5bDm", "F3odArxUfS", "7UVswQVIKT" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1730175222949, 1730639809837, 1734609230074, 1730573844770, 1730565291756, 1730515045490, 1737523723790 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5762/Reviewer_9NpQ" ], [ "ICLR.cc/2025/Conference/Submission5762/Reviewer_X2dM" ], [ "ICLR.cc/2025/Conference/Submission5762/Area_Chair_FTaw" ], [ "ICLR.cc/2025/Conference/Submission5762/Reviewer_Q7Nq" ], [ "ICLR.cc/2025/Conference/Submission5762/Reviewer_YQvo" ], [ "ICLR.cc/2025/Conference/Submission5762/Reviewer_yZK7" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces LGDiffGait, a framework for gait recognition that utilizes Local and Global Difference Modules (LDM and GDM) to capture fine-grained and broad temporal features from silhouettes. 
A Temporal Alignment Module (TAM) further aligns these features across sequences, resulting in state-of-the-art performance on multiple benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"LGDiffGait employs a dual-level differentiation approach with a Temporal Alignment Module (TAM) that captures both subtle and broad temporal features, ensuring cohesive alignment across sequences.\", \"Sufficient Comparisons.\", \"Visualizations using t-SNE and Grad-CAM effectively illustrate the model\\u2019s attention to key regions, particularly in capturing dynamic limb movements, which enhances interpretability.\"], \"weaknesses\": [\"**Broader Comparison with Temporal Methods**: Expanding comparisons with other temporal methods would better contextualize LGDiffGait\\u2019s specific advantages, situating it within the broader landscape of temporal gait recognition models.\", \"**Validation of Temporal Method Generalizability**: To fully assess the generalizability of the temporal methods (LDM, GDM, and TAM), applying these modules to various baseline models (e.g., GaitBase and DeepGaitV2) would provide a clearer demonstration of their adaptability and effectiveness across different architectures.\", \"**Lack of Efficiency Metrics**: The absence of parameter and FLOP metrics limits understanding of the model\\u2019s computational demands, which would be valuable for assessing its scalability and efficiency.\", \"**Poor Novelty**: The community may find it hard to get new ideas from the manuscript. The local and global shifted (diff) temporal modeling has been discussed many times in previous works [1, 2, 3].
The authors have made much effort on this topic but still have not achieved sufficiently impressive performance improvements across all the employed datasets.\", \"[1] Lin et al, GaitGL at ICCV2021\", \"[2] Lin et al, MT3D at MM2020\", \"[3] Zheng et al, MSTGait at MM2023\"], \"questions\": \"Addressing issues related to novelty, generalizability, efficiency metrics, and broader comparative analysis would further strengthen this paper's impact.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an approach to gait recognition that leverages both local and global difference learning within video silhouettes to enhance feature extraction. The method uses Local Difference Modules (LDM) and Global Difference Modules (GDM) to capture intricate motion details across both short and long temporal spans, with a Temporal Alignment Module ensuring consistency across the extracted features. The framework significantly outperforms existing methods on multiple benchmarks, demonstrating its robustness and effectiveness in gait recognition across diverse conditions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Well-structured paper with clear explanations of the methods and results.\", \"Demonstrates state-of-the-art results on multiple gait recognition datasets, showing improvements over existing methods.\"], \"weaknesses\": \"1. The paper introduces the concept of local and global gait differences without a thorough discussion of the underlying motivations or theoretical foundations compared to traditional spatial-temporal approaches. Insightful exploration into specific scenarios where existing methods fail could substantiate the need for this new approach.
A deeper analysis would help clarify why the proposed method better captures unique gait characteristics, potentially through comparative studies or by linking the approach to fundamental biomechanical principles of human motion.\\n\\n2. Noise in silhouette data could affect difference accuracy. The reliance on pre-processed silhouette data, which is susceptible to noise from segmentation and alignment errors, raises concerns about the integrity of the gait differences captured by the model. This method's effectiveness might be compromised if these preprocessing steps introduce artifacts that are mistaken for intrinsic gait differences. The paper could benefit from a robust discussion on preprocessing techniques' reliability and strategies to mitigate their impact, ensuring that the gait differences reflect true biomechanical motion rather than processing inaccuracies.\\n\\n3. Absence of cross-dataset evaluation limits the demonstrated generalizability of the LGDiffGait model. Including such evaluations would not only validate the model's robustness across varied settings but also highlight its performance stability amidst different capture conditions and demographic variabilities. Insights into how the model performs when trained on one dataset and tested on another could underscore its utility in real-world applications and help identify potential biases or limitations in dataset-specific training.\\n\\n4. It would be advantageous for the research to examine the model's applicability to RGB data, which remains unexplored and thus limits its use in scenarios where only RGB data is available. It would be valuable to discuss or demonstrate how the model could be adapted for RGB inputs, potentially expanding its practical relevance and adoption. Exploring methodologies to integrate color and texture information available in RGB data could potentially enhance the model\\u2019s discriminatory power by leveraging additional cues beyond silhouette shapes.\\n\\n5. 
It would be beneficial for the paper to explore the impact of frame step size on the performance of gait recognition. Since the frame interval can significantly influence the detection of subtle gait differences, investigating optimal step sizes for different gait speeds or conditions could yield deeper insights. It would be informative to analyze how varying intervals affect the model\\u2019s ability to detect meaningful differences, which would enhance our understanding of the model\\u2019s sensitivity and operational flexibility.\", \"questions\": \"1. What are the computational costs associated with the LGDiffGait model?\\n\\n2. How does the LGDiffGait model handle noisy silhouette data resulting from poor segmentation or alignment processes during preprocessing? \\n\\n3. Are there ongoing or planned future works to adapt the LGDiffGait framework for use with RGB data? What potential methodologies or modifications are being considered to incorporate color and texture information into the current model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces a gait recognition method that considers both local and global difference learning in video silhouettes to enhance feature extraction, namely, incorporates local and global gait features in a unique representation able to capture motion details across both short and long temporal spans, also adopting a temporal alignment mechanism to ensure consistency across the extracted features. The framework significantly outperforms existing methods on multiple benchmarks, demonstrating its robustness and effectiveness in gait recognition across diverse conditions.\\n\\nOther than a general appreciation of this work, several negative aspects and points of discussion have been raised by the reviewers. 
They mainly concern the limited originality of the introduced concept, weak motivation, insufficient experimental analysis, ablations, and comparative tests, poor explanation or justification of some steps of the proposed approach, a somewhat unclear description of the approach, and a missing discussion of the method's limitations.\\n\\nThe authors did not provide a rebuttal to these comments; hence, this paper cannot be accepted for publication at ICLR 2025.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal was provided by the authors.\"}", "{\"summary\": \"The paper presents an approach for gait identification called Local and Global Difference Learning for Gait Recognition with Silhouettes (LGDiffGait). The method incorporates local and global gait features in a unique representation. The approach is evaluated on different public datasets and, compared to existing methods, provides superior results.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The paper considers an interesting problem, whose relevance goes beyond gait recognition\", \"The state-of-the-art is fairly discussed (with a few exceptions, see Questions)\", \"The results are superior to existing approaches on different public datasets\"], \"weaknesses\": [\"The presentation of the method and the procedure flow is not fully clear (see Questions). I think this is\", \"The limitations of the approach are only briefly touched upon (the authors mention the computational aspects)\"], \"questions\": [\"About existing approaches: the discussion does not mention works based on architectures for sequences (e.g. LSTMs or Transformers). Are these approaches missing? Can you discuss how your approach compares to them, if they exist in the literature?\", \"About the need for both local and global temporal representations of motion, existing deep architectures such as SlowFast (https://arxiv.org/abs/1812.03982) have addressed this problem.
From what I can gather, it seems to me that the attempt of the authors is different, in the sense that they want to keep the model complexity under control. Nevertheless, a discussion in relation to these already existing approaches would be beneficial to fully appreciate the intentions of the authors and to better contextualize your design choices\", \"An important influence on the performance of the method is from the silhouettes in input. Comments in this sense are missing\", \"The reader is a bit lost in the details of the method. Although in some parts they are even redundant (e.g. when describing twice, with text and with a formula, the main architectural operations), in my opinion a clear storytelling of the method is missing. In particular, the flow of the forward propagation is unclear to me. What's the input? A single image, image pairs, the whole sequence? [Further doubts on this part are related to some of my questions below.]\", \"Related to the first point, I miss the meaning of Fig. 1. Should this be intended as an example of input? Under what circumstances are we facing the different situations? I suggest you provide a more detailed caption of the figure, clarifying the purpose of the figure.\", \"In sec. 3.2.2 the need for the padding is mentioned, but the technical/practical motivations are unclear\", \"It would be nice to have an intuition on the behavior of the Local Difference Module with an example (an image?)\", \"When computing the differencing steps, the procedure is reminiscent of a change detection approach. Is this correct?\", \"The index t appears only in the GDM, so it is not clear to me how the sequence is processed\", \"The presence of a triplet loss unveils that a specific training strategy is adopted, but this is introduced only in Sec. 3.3 with no appropriate discussion. How is the training organized?
I suggest you provide a more detailed explanation of the training procedure (including, for instance, the sampling strategies used for the input pairs) at the point of the paper you find most appropriate.\", \"The results from all methods are very high in general, with no particular coherence between the different views or any common pattern as the viewing angle changes. Any intuition on the reasons why? Can this give suggestions on the nature of the datasets or the generalization capabilities of the methods? What are the implications for the practical applicability of gait recognition systems?\", \"A thorough discussion on limitations would be appreciated\", \"A comment on ethical aspects is needed\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The paper is about gait identification from videos. Only public data have been employed here, but some comments on ethical aspects may be beneficial.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a gait recognition framework named LGDiffGait, which incorporates Local and Global Difference (LGDiff) blocks. The LGDiff block consists of two components: a Local Difference Module (LDM) and a Global Difference Module (GDM). The LDM captures local motions between adjacent frames within a sliding window, while the GDM captures global differences across the whole sequence. A Temporal Alignment Module (TAM) is further used to align the extracted local and global differences with the overall sequence dynamics.
Experiments on four gait datasets demonstrate that the proposed method achieves SOTA gait recognition performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Using the difference information along the temporal dimension is reasonable for enhancing gait recognition.\", \"Experimental results show the SOTA performance of the proposed method.\"], \"weaknesses\": [\"The biggest concern is the theoretical novelty of the proposed method. The use of difference features has already been explored in DyGait (Wang et al. 2023b), which is almost the same as the global difference module in this work. The primary distinction lies in the introduction of the local difference module, which shifts the extraction of difference features from the entire sequence\\u2014as utilized in the global difference module\\u2014to differences across several adjacent frames within a sliding window, which is a minor modification. In addition, the learning of local features has been widely applied in gait recognition, both in spatial and temporal domains, and is not a new concept. Consequently, these factors limit the technical contribution of this paper.\", \"The proposed framework and the approach to learning difference features are primarily tailored for the specific task of gait recognition, offering limited insights for broader tasks or other areas of representation learning. So, this paper may not be ideally suited for ICLR.\", \"In the temporal alignment module, it is explained that the difference features are aligned with the overall sequence dynamics. However, from my understanding, the temporal order is preserved when extracting difference features, so it is unclear why temporal misalignment of the difference features would occur. In addition, temporal alignment is proposed to be achieved by concatenating the main features with the difference features and further applying a 3D convolution. 
The rationale behind these operations for achieving temporal alignment also requires further clarification.\", \"There are a few typos in the paper; for example, in Section 3.2.4, deisned -> designed. Some descriptions of related works may not be entirely accurate. For example, SMPLGait (Zheng et al., 2022) is more accurately described as a fusion of appearance-based and model-based methods rather than purely model-based. The 3D model is used solely to learn view transformation for silhouette features, with recognition relying exclusively on silhouette features.\"], \"questions\": \"In the first row of Table 5, does this indicate that the whole LGDiff blocks have been removed? If that is the case, does the evaluated backbone contain only 3D conv + TP + HPP, yet still achieve such high performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors identified the problem that existing methods mainly focus on extracting features from the entire gait sequence, so they introduced the LGDiff block to capture differences at the local and global levels, with a temporal alignment module to help the model focus on more detailed movement. Based on the experimental results, the performance over four datasets is higher than that of the SOTA methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to understand.\\n2. The figures are clear and the tables are easy to read.\\n3. The proposed method is reasonable in that more hand-crafted features are involved in the feature extraction, leading to the overall performance improvement.\", \"weaknesses\": \"1. How does the alignment module align temporal information? It just combines the main and the difference features, and it is not proper to define it as 'align'.\\n\\t2. How much is the model size increased?
It seems it introduces a dual network to extract the difference features. And since DeepGaitV2 is already big, LGDiffGait is likely to be an even larger model, so it is hard to say whether the improvement comes solely from a nice model design or from better features.\\n\\t3. The authors said the difference is an essential feature to measure the detailed movement. Did you try using the difference only to see how it performs? The idea is similar to using optical flow to describe the motion.\\n 4. The improvement in performance does not necessarily mean that nuances are extracted; it may be due to overfitting on some non-gait-related objects. It would be better to use an attention map or cross-domain evaluation to show the effectiveness.\\n 5. There is a lack of analysis about why this design is good and why it works well.\", \"questions\": \"Optical flow also focuses on the difference, so what is the advantage of using this LGDiff?\\nSince the difference could capture the nuances, why does the model need the main convolution branch rather than just using the difference?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
4ZeOIf2dtC
Looking beyond the surface with Contrastive LEarning with Anti-contrastive Regularization (CLEAR)
[ "Minghui Sun", "Benjamin Goldstein", "Matthew M. Engelhard" ]
Learning representations that are robust to superficial sources of variability is important to ensure such variability does not impact downstream tasks. For instance, in healthcare applications, we might like to learn features that are useful for identifying pathology, yet have similar distributions across diverse demographic groups, leading to more accurate and equitable diagnoses regardless of background or surface characteristics. More broadly, this capability can improve the generalizability of our representations by mitigating unwanted effects of variability not seen during training. In this work, we suppose that data representations can be semantically separated into two components: $content$ and $style$. The $content$ consists of information needed for downstream tasks -- for example, it is predictive of the class label in a downstream classification problem -- whereas the $style$ consists of attributes that are superficial in the sense that they are irrelevant to downstream tasks, yet may compromise performance due to associations observed in training data that do not generalize. Here we propose a weakly supervised framework, Contrastive LEarning with Anti-contrastive Regularization (CLEAR), to effectively disentangle $content$ and $style$ in the latent space of a Variational Autoencoder (VAE). Our anti-contrastive penalty, which we call Pair Switching (PS), uses a novel label flipping approach to ensure content is recognized effectively and limited to the $content$ features. We perform experiments to quantitatively and qualitatively evaluate CLEAR-VAE across distinct data modalities. We then analyze the trade-off between disentanglement and ELBO, and the impact of various hyperparameters within our framework. 
Our results show that using disentangled representations from CLEAR-VAE, we can: (a) swap and interpolate $content$ and $style$ between any pair of samples, and (b) improve downstream classification performance in the presence of previously unseen combinations of $content$ and $style$.
[ "Weakly Supervised Learning", "Disentangled Representation Learning", "Variational Autoencoder", "Contrastive Learning" ]
https://openreview.net/pdf?id=4ZeOIf2dtC
https://openreview.net/forum?id=4ZeOIf2dtC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pyBsLDyLL3", "gPVn9mslS4", "LDERL38HZd", "9tVfcdFGQv", "2nykWChsvv", "2herAGaLY5" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731905050307, 1730147013512, 1730447969771, 1730281098425, 1730479238565, 1730758074632 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4033/Authors" ], [ "ICLR.cc/2025/Conference/Submission4033/Reviewer_VmPE" ], [ "ICLR.cc/2025/Conference/Submission4033/Reviewer_Hdz9" ], [ "ICLR.cc/2025/Conference/Submission4033/Reviewer_brDh" ], [ "ICLR.cc/2025/Conference/Submission4033/Reviewer_nQdV" ], [ "ICLR.cc/2025/Conference/Submission4033/Reviewer_6DHk" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the reviewers' thoughtful comments and valuable insights. After careful reflection and consideration, we have decided to respectfully withdraw our manuscript.\"}", "{\"summary\": \"This work proposes a weakly supervised method to disentangle content and style latent variables in variational autoencoders (VAEs), where content variables are those that have a (non-spurious) correlation with the downstream task to solve, and style variables are the rest of them (which should not affect the downstream task). To this end, the authors propose a combination of three losses: a $\\\\beta$-ELBO to encourage disentangle representations, a contrastive loss to incentivize content features from the same downstream label to be equal, and another contrastive loss to incentivize the style features from different downstream labels to be equal. 
The authors demonstrate empirically the efficacy of the proposed method by: qualitatively generating samples that swap/traverse the content and style variables; and quantitatively by comparing the performance of their model with another VAE model (ML-VAE).\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"**S1** The intuition of the method is quite clear.\", \"**S2** The problem of finding disentangled representations (content vs. style) in latent space is of interest to the community.\", \"**S3** The qualitative results are quite impressive, and the quantitative ones are clear as well.\", \"**S4** The setup for the quantitative experiments is clearly detailed.\"], \"weaknesses\": \"The manuscript has many things to improve in its current state. While I will point out some of the most critical ones below, this is not an exhaustive list and I did not dive as deeply into other aspects (e.g. experiments) since they would require the list below to be addressed:\\n\\n- **W1.** The paper's writing contains several typographical errors (L40, L147, L201, L218, L247), incoherent/unnecessary statements (L63, L149-L151), and many unexplained terms (L63, L155, L194, Eq. 9, L215, L253, contrastive learning is given as related work but not explained, etc), and dubious examples (e.g. the one described for the medical domain is questionable at best, since differences _directly caused_ by the sensitive attributes should **not** be removed from the predictions).\\n- **W2.** The maths contain many typos (Eq 1 should negate the loss, Eqs 4,5,6 are missing a closing parenthesis, Eq. 8 writes $c$ instead of $s$, etc.) and questionable statements (L184: you cannot _reasonably_ assume the posterior factorizes, as they are _dependent_ given $x$; L204: what is the norm of two comma-separated terms?; why not use the definitions in Eq.
5 and 6 in the section before?; L236: Why would having a negative value complicate the minimization?; L248: why would the model reach an equilibrium at all?; etc.). This is on top of statements with little explanation/intuition (e.g. L219: why would the content get mixed with the style features, if they are predictive of the label?)\\n- **W3.** The most pressing issue is the literature review (and therefore the baselines in the experiments) which is really weak and misses a lot of relevant works. To name just a few, there are a number of works on identifying content from style variables (for example, [1,2]), which has also been applied to multimodal VAEs [3]. Moreover, contrastive learning has already been applied to successfully learn multimodal representations [3,4]. All these works (which by no means form an exhaustive list) are relevant and the authors should discuss and compare with them.\\n\\n---\\n\\n[1] [Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style](http://arxiv.org/abs/2106.04619)\\n\\n[2] [Multi-View Causal Representation Learning with Partial Observability](http://arxiv.org/abs/2311.04056)\\n\\n[3] [Identifiability Results for Multimodal Contrastive Learning](https://openreview.net/forum?id=U_2kuqoTcB&s=09)\\n\\n[4] [Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models](http://arxiv.org/abs/2007.01179)\", \"questions\": [\"**Q1.** What is the difference in the text between a group and a class? Are they the same?\", \"**Q2.** What is the Jeffrey divergence?\", \"**Q3.** Where is the naming \\\"anti-contrastive\\\" coming from?\", \"**Q4.** Why explain a VAE as only having Gaussian-distributed posteriors/likelihoods? I know it is the common practice, but still.\", \"**Q5.** How do you decide the dimensionality of the content and style variables?
And why did you decide to have an autoencoder (rather than just an encoder) if only the content is relevant for the downstream task?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces CLEAR-VAE (Contrastive Learning with Anti-contrastive Regularization for Variational Autoencoders), a framework designed to disentangle content and style components in data representations. The authors propose a new Pair Switching (PS) technique to ensure style features remain independent of the content, enhancing model robustness against superficial variations. CLEAR-VAE\\u2019s performance is demonstrated across multiple datasets, including Styled-MNIST, CelebA, and Amazon Product Reviews, with both qualitative and quantitative evaluations of the learned disentangled representations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"**Novelty and Contribution**:\", \"The authors address the challenge of disentangling style and content without relying on explicit style labels. The proposed PS technique and weakly supervised contrastive framework represent a meaningful advance in disentangled representation learning.\", \"The disentanglement method has potential applications in real-world scenarios where spurious correlations in training data (e.g., demographic biases) can affect model generalizability and fairness.\", \"**Thorough Experimental Analysis**:\", \"The paper presents comprehensive experiments, including swapping and interpolation tasks, to visually illustrate CLEAR-VAE\\u2019s disentangling capabilities. 
Moreover, quantitative measures of generalizability on unseen style-content combinations reinforce the practical benefits of CLEAR-VAE.\"], \"weaknesses\": [\"**Limitations of Experimental Setup**:\", \"The study uses relatively simplified datasets, such as Styled-MNIST and CelebA, for experiments, which may limit the understanding of CLEAR-VAE\\u2019s performance on more complex data. Testing the model on more challenging real-world datasets, particularly those with higher-dimensional variations in style (e.g., medical imaging data), would better showcase its generalizability and robustness.\", \"**Limited Baselines**: There are limited comparisons to baseline approaches. In [1], more approaches working in the weakly-supervised setting are discussed that could also serve as VAE-baselines, e.g. Group VAE. [2] propose a method that can automatically infer the group size of shared factors during training, which might at least be worth discussing in the related work section. And what about comparisons to other contrastive learning approaches?\", \"**Motivation**: I am not sure the motivation for the approach is clear to me. Given that the content labels are available during training and the assumptions state that only content labels are necessary for downstream prediction tasks, why are we training a VAE and not a supervised classifier?\", \"---\", \"[1] Locatello et al., \\\"Weakly-Supervised Disentanglement Without Compromises\\\", ICML 2020\", \"[2] Sutter et al., \\\"Differentiable Random Partition Models\\\", Neurips 2023\"], \"questions\": [\"supervised contrastive learning: Would it make sense to include the paper \\\"Supervised Contrastive Learning\\\" by Khosla et al.? It is based on a similar idea to combine labels and contrastive learning.\", \"are the assumptions realistic? This question is related to the motivation of the paper. How likely is it that we will have access to all the content labels during training? 
It would be interesting to analyze what happens if it is not the case. What about an experiment where only a subset of content labels is available?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel framework called CLEAR (Contrastive Learning with Anti-contrastive Regularization), designed to effectively disentangle content and style in data representations. CLEAR utilizes a single set of content labels (information required to perform a downstream task of interest), while the style source of variability (consisting of attributes irrelevant to the downstream task) is not explicitly required for model training. CLEAR introduces Pair Switching, a Contrastive Learning-inspired regularization loss and a label-switching method that enables the content latent space to learn only the content information. This framework allows for the separation of relevant content features from superficial style attributes, ultimately enhancing the generalizability of models in downstream tasks. Experimental results demonstrate that CLEAR not only facilitates the interpolation and swapping of content and style between samples but also improves classification performance in scenarios with unseen combinations of content and style. Overall, the proposed method offers significant advancements in creating robust representations that mitigate the effects of unwanted variability, particularly in sensitive applications like healthcare.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper effectively motivates the subject of disregarding style information within the content latent space by using the example of a medical healthcare application. 
It soundly presents the theoretical foundations of the concepts it utilizes (ELBO loss and variational auto-encoder, Mutual Information Gap, a classifier-free disentanglement measurement method, etc.). The paper illustrates its results well, both quantitatively (showing generalization in a downstream imbalanced classification scenario and demonstrating that encouraging disentanglement promotes generalization) and qualitatively (with intuitive visual experiments using CelebA and Style MNIST).\", \"weaknesses\": \"Overall, it seems that Variational Autoencoders are not the latest state-of-the-art models in terms of both 1) generation and 2) representation learning, which dampens the practical use of this method compared to diffusion models and contrastive learning methods. Additionally, the comparison with existing methods appears to be relatively weak, and similar approaches could have been discussed or compared more thoroughly, particularly the Capturing Characteristics VAE (cc-VAE) [A].\\n\\nFurthermore, the Pair-Switching loss could have been compared with standard Mutual Information minimization losses, such as Total Correlation [B] or kernel Joint Entropy Minimization loss [C].\\n\\nUltimately, the experimental section seems relatively weak, as it only involves three datasets. Experiments in healthcare applications with real-world impact would have been expected.\\n\\n[A] Capturing Label Characteristics in VAEs, T. Joy et al. ICLR 2021, https://arxiv.org/abs/2006.10102. \\n[B] Abubakar Abid and James Zou. Contrastive Variational Autoencoder Enhances Salient Features, 2019, https://arxiv.org/abs/1902.04601. \\n[C] Separating Common from Salient Patterns with Contrastive Representation Learning, R. 
Louiset et al., ICLR 2024, https://arxiv.org/pdf/2402.11928.\", \"questions\": \"How can this method be applied in a healthcare scenario where unwanted style attributes are often partially observed (e.g., acquisition site, demographic attributes such as age and sex are generally available in practice)? Could this approach be adapted into a semi-supervised framework that accounts for partially observed style attributes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces CLEAR-VAE, a novel weakly supervised framework designed to learn semantically disentangled latent representations of content and style within VAEs. Unlike traditional methods, CLEAR-VAE leverages contrastive pairs rather than explicit ground truth labels, offering a more flexible approach to disentangling these features. Specifically, content encompasses information critical for downstream tasks, while style includes superficial and irrelevant attributes to those tasks.\\n\\nCLEAR-VAE achieves disentanglement by extracting group-level content representations through groups organized by ground truth labels and by separating style from content representations without requiring labels for style attributes. To accomplish this, the framework enhances the standard $\\\\beta$-VAE loss by introducing two additional penalties: (1) a contrastive regularization adapted from the SNN loss, which encourages similarity in $z^{(c)}$ (content) representations within the same downstream label, and (2) an anti-contrastive regularization that promotes ambiguity in $z^{(s)}$ (style) regarding the downstream label.\\n\\nTo evaluate its effectiveness, the paper provides both qualitative and quantitative experiments. The qualitative results confirm the successful disentanglement of content and style representations. 
At the same time, quantitative comparisons against ML-VAE and baseline models demonstrate CLEAR-VAE\\u2019s superior performance and generalizability to unseen combinations of style and content.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Problem Significance**: This paper addresses a significant challenge\\u2014disentangling content and style representations in VAEs. This approach not only enhances interpretability but also increases the model's robustness against superficial sources of variability, making it better suited for downstream tasks.\\n\\n2. **Elegance of Anti-Contrastive Regularization**: The design of the anti-contrastive regularization is both elegant and effective. By flipping the labels of positive and negative pairs in $L_{SNN}^{(s)}$, it achieves a similar effect as directly minimizing $-L_{SNN}^{(s)}$ while ensuring that the regularization term remains non-negative. This method simplifies model optimization, eliminating potential issues related to negative terms.\\n\\n3. **Comprehensive Evaluation**:\\n - **Qualitative**: The results of the swapping and interpolation operations are impressive, effectively showcasing the model's disentanglement capabilities (see Fig. 3, Fig. 4, Appx. A.6).\\n - **Quantitative**: The setups for evaluating the model's generalizability are well-defined, offering clear insights into performance (e.g., Table 2).\", \"weaknesses\": [\"1. **Core Innovations and Clarity**: The core contributions of this paper, according to the authors, are a weakly supervised framework and an anti-contrastive regularization for style representation.\", \"**Weakly Supervised Framework**: For semantic disentanglement of style and content, the authors introduce a weakly supervised framework. However, they omit comparisons with recent similar works, which could lead to misunderstandings that this is the first use of a weakly supervised framework in this context. 
For instance, other papers have also adopted weakly supervised approaches, such as:\", \"*[\\\"SW-VAE: Weakly Supervised Learning of Disentangled Representations via Latent Factor Swapping\\\"](https://arxiv.org/abs/2209.10623)*\", \"*[\\\"Weakly Supervised Disentangled Generative Causal Representation Learning\\\"](https://jmlr.org/papers/v23/21-0080.html)*\", \"**Anti-Contrastive Regularization**: The authors state that the benefit of anti-contrastive regularization over direct optimization of $-L_{SNN}^{(s)}$ is that it avoids complicating the minimization of final loss. However, this claim lacks further explanation or experiment validation, which weakens its persuasiveness regarding the ''complication'' mentioned.\", \"2. **Some Mathematical Errors and Confusing Sentences**:\", \"The VAE loss in Equation 1 should be the negative ELBO.\", \"The two symbols $L_{SNN}^{(s)}$ and $L_{SNN^{(s)}}$ have been used interchangeably, confusing the reader.\", \"In Equation 9, *z\\\\** is supposed to represent the latent variable with the maximum normalized mutual information, but there is no annotation provided.\", \"In line 358, it is unclear why contrastive regularization is applied specifically to the EOS token's latent representation, while KL regularization applies to the entire set of contextualized embeddings.\"], \"questions\": \"1. In the quantitative experiments, the performance advantage of CLEAR-VAE over the baseline model exhibits differing trends across datasets as the value of $K$ increases(Fig. 6). Specifically, on the styled-MNIST dataset, this advantage diminishes, whereas on the CelebA dataset, it amplifies. Given that both datasets are image-based, it would be beneficial for the authors to provide an explanation for this discrepancy.\\n2. 
In line 233, the phrase *\\\"we will encourage the representation to be ambiguous about the supervised label\\\"* raises a question: does this imply minimizing $ I(z^{(s)}; y) $ or maximizing $gMIG(y)$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new framework named Contrastive Learning with Anti-contrastive Regularization (CLEAR-VAE), designed to improve the disentanglement of data representations by separating essential \\\"content\\\" features from irrelevant \\\"style\\\" attributes. CLEAR-VAE extends the Variational Autoencoder (VAE) model by integrating a Pair Switching (PS) anti-contrastive regularization. This mechanism effectively disentangles content from style representations in a weakly supervised setting by penalizing style features with different content labels. The framework allows data with similar labels to maintain similar content while style remains independent, enhancing model generalization on unseen data. CLEAR-VAE has been evaluated across image and text datasets, where it shows potential to enhance classification performance by effectively managing content and style in latent representations. 
It has introduced PS regularization, content-style swapping experiments, and a novel metric to quantify disentanglement efficacy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The use of Pair Switching (PS) as an anti-contrastive regularization method is new and directly addresses limitations in previous disentanglement approaches by encouraging independent style distributions across content labels.\\n\\nBy disentangling content and style, CLEAR-VAE enhances classification accuracy on previously unseen content-style combinations, which is valuable in applications sensitive to bias or style variance.\\n\\nThe framework's potential to mitigate biases in healthcare and other applications underscores its real-world impact, highlighting the model\\u2019s relevance to equitable decision support.\", \"weaknesses\": \"While the paper mentions the impact of temperature and similarity metrics in SNN loss, it could benefit from a deeper analysis of hyperparameters and their effects on performance, especially in complex datasets.\\n\\nThe paper could improve its impact by comparing CLEAR-VAE\\u2019s performance with other recent disentanglement methods.\\n\\nSince CLEAR-VAE relies on content labels as the only form of weak supervision, the approach may be limited when label noise or ambiguities exist. This limitation is particularly relevant in real-world applications where ground-truth labels are less reliable.\", \"questions\": \"Please see the weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4ZX2a3OKEV
Solving hidden monotone variational inequalities with surrogate losses
[ "Ryan D'Orazio", "Danilo Vucetic", "Zichu Liu", "Junhyung Lyle Kim", "Ioannis Mitliagkas", "Gauthier Gidel" ]
Deep learning has proven to be effective in a wide variety of loss minimization problems. However, many applications of interest, like minimizing projected Bellman error and min-max optimization, cannot be modelled as minimizing a scalar loss function but instead correspond to solving a variational inequality (VI) problem. This difference in setting has caused many practical challenges as naive gradient-based approaches from supervised learning tend to diverge and cycle in the VI case. In this work, we propose a principled surrogate-based approach compatible with deep learning to solve VIs. We show that our surrogate-based approach has three main benefits: (1) under assumptions that are realistic in practice (when hidden monotone structure is present, interpolation, and sufficient optimization of the surrogates), it guarantees convergence, (2) it provides a unifying perspective of existing methods, and (3) is amenable to existing deep learning optimizers like ADAM. Experimentally, we demonstrate our surrogate-based approach is effective in min-max optimization and minimizing projected Bellman error. Furthermore, in the deep reinforcement learning case, we propose a novel variant of TD(0) which is more compute and sample efficient.
[ "Variational Inequality", "Optimization", "Surrogate", "Projected Bellman Error", "Min-max Optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=4ZX2a3OKEV
https://openreview.net/forum?id=4ZX2a3OKEV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pTNi61BeHl", "jQd0O9fSvu", "h5UW9Dhp5d", "fOqltgQ4hW", "erLys8Hce7", "d2oEuqiCHa", "ZyhUSsx9wl", "YH6opGqJqv", "XZyKLauZsI", "OZAEJgoZYx", "NbgUT0NoQ8", "KjLAOaKfE9", "BCoV2LTo7J", "ASxOyZaeUD", "6AvyE1Xylg", "5V4Dxm9HqY", "3knepstKby", "0onKAcFRpM" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1731976362813, 1730160798599, 1734713025862, 1731976185229, 1731976627330, 1731975123253, 1731975676798, 1732758161262, 1730657662112, 1737524194791, 1732323492289, 1731976805601, 1731975154617, 1731976660757, 1732265854386, 1730396901316, 1731975765682, 1730618406894 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_HyYK" ], [ "ICLR.cc/2025/Conference/Submission12488/Area_Chair_Qn8W" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_LPnR" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_HyYK" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_L5s9" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_L5s9" ], [ "ICLR.cc/2025/Conference/Submission12488/Authors" ], [ "ICLR.cc/2025/Conference/Submission12488/Reviewer_eBkP" ] ], "structured_content_str": [ 
"{\"title\": \"Author Response Continued\", \"comment\": \"> (1) In the basic case of supervised learning with a scalar loss, can we expect the proposed method perform better than off-the-shelf optimizers that work directly in the parameter space, i.e., Adam?\\n\\nIn our paper we have focused on the VI setting and shown significant data efficiency improvements by using a surrogate loss approach. **We would also like to emphasize that our approach is optimizer agnostic. Our surrogate loss approach actually allows one to use Adam in the inner loop if they so choose.** If we take one gradient step instead of multiple steps in the inner loop then our method is exactly gradient descent where the stepsize is $\\eta\\cdot\\eta_{alg}$, where $\\eta$ is the stepsize in the surrogate loss and $\\eta_{alg}$ is the stepsize used by the gradient step in the inner loop. Therefore our approach includes Adam as a special case. [1] studied the use of surrogate losses with a fixed stepsize inner loop in the supervised learning case and showed comparable results to Adam. It is unclear if they tried Adam in the inner loop of their surrogate method. In our RL experiments we used AdamW in our inner loop and found it to outperform AdamW with one step (i.e. TD(0)). \\n\\n> (2) The condition in the while loop of Algorithm 1 can not be verified. How could we let alpha be the user-defined parameter?\", \"in_the_paper_we_outline_cases_and_several_approaches_to_deal_with_this\": [\"In Section 4 we discuss some cases where a finite number of gradient steps guarantees any $\\alpha$, therefore you can just tune the number of gradient steps\", \"In a highly overparameterized regime, $\\ell_t^\\ast$ is 0 and therefore $\\alpha$ can be checked directly. 
We can then optimize the inner loop until it is satisfied.\", \"In practice, a finite number of gradient steps works well across a wide range of tasks\", \"Min max optimization (our paper)\", \"Minimizing projected Bellman error (our paper)\", \"Supervised learning [1]\", \"Policy gradient RL [3]\", \"In comparison to all the existing works, despite ours being in the more difficult VI setting, our descent condition is the closest to understanding why a small number of gradient steps can converge to a good solution.\"], \"references\": \"[1] Lavington, J.W., Vaswani, S., Babanezhad Harikandeh, R., Schmidt, M. & Le Roux, N. (2023). Target-based Surrogates for Stochastic Optimization. <i>Proceedings of the 40th International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i> 202:18614-18651. Available from https://proceedings.mlr.press/v202/lavington23a.html.\\n\\n[2] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy8gdB9xx.\\n\\n[3] Sharan Vaswani, Olivier Bachem, Simone Totaro, Robert Müller, Shivam Garg, Matthieu Geist, Marlos C. Machado, Pablo Samuel Castro, Nicolas Le Roux. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8619-8649, 2022.\"}", "{\"summary\": \"This paper studies the problem of solving variational inequalities with a hidden monotone structure using deep learning methods. The authors present an iterative optimization algorithm with two nested loops, where in the outer loop a surrogate square loss is constructed and partially minimized in the inner loop (until a sufficient decrease condition called $\\alpha$-descent is satisfied) using an optimizer such as a quasi-Newton method or ADAM. 
When $\\\\alpha$ is sufficiently smaller than $1$, The authors prove linear (in the outer iterations) convergence guarantees in deterministic and stochastic settings, where in latter, the algorithm converges to a neighbourhood of the solution. They also prove that, when considering general variational inequalities, $\\\\alpha < 1$ is not sufficient to guarantee convergence. Further, they show how several methods can be seen as special cases of their algorithm. They also present experimental results on min-max optimization and reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The theoretical analysis is quite thorough, considering deterministic and stochastic cases and also addressing an issue with a previous analysis. The authors also explain how previous methods fit in their framework and discuss assumptions and potential limitations.\\n2. Promising experiments showing that the method can achieve faster convergence with more than one inner step iteration also in practical settings such as reinforcement learning with MLP value functions.\", \"weaknesses\": \"1. Certain parts lack clarity. Condition 2 in Theorem 3.2 seems unnecessary and needs refinement. (See question and comments).\\n2. The paper lacks larger scale experiments. For example Deep RL experiments (with bigger underlying neural networks) could be included to demonstrate the claimed scalability of the method.\", \"questions\": \"Questions:\\n1. You claim that Sakos et al. (2024) assumes $\\\\mathcal{Z}$ bounded implicitly in their main lemma. Can you clarify which lemma, where it is assumed and which results are affected? I could not easily find it and I think this is an important point since it uncovers a fallacy in a previous analysis.\", \"comments\": [\"The second condition on $\\\\alpha$ in theorem 3.2 seems to be always satisfied by setting $p=1$ and $C\\u200e\\u2009=\\u2009\\\\alpha/\\\\eta$. 
From the proof it appears that there is a hidden dependency between $\\eta$, $C$, $p$, and $\\alpha$. The statement should either include such dependency or at least clarify this aspect.\", \"The use of the term \\u201cgradient step\\u201d contrasts with the Variational inequality formulation where in general $F$ is not a gradient. A possible alternative could be \\\"proximal step\\\".\", \"The related work paragraph in the introduction should probably be a subsection or its name should be removed or changed.\", \"The PL condition is not properly defined in the main text. It could be defined close to Line 345 or at least in the appendix.\"], \"references\": \"Sakos, Iosif, et al. \\\"Exploiting hidden structures in non-convex games for convergence to Nash equilibrium.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies a class of variational inequalities by assuming hidden monotonicity. The paper proposes a surrogate-based approach: constructing a surrogate and employing any optimizer to ensure a sufficient decrease condition. The paper provides some convergence analysis under deterministic and stochastic settings. The experiments are done on some toy problems and RL problems. All reviewers agreed the paper has made some nice contributions and recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided a detailed rebuttal. Some reviewers have acknowledged that their concerns have been addressed but also mentioned some weaknesses in terms of strong assumptions.\"}",
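An aside for readers of this thread: the cycling of naive gradient methods on monotone VIs, which motivates the surrogate approach discussed throughout, can be reproduced in a few lines. This is an illustrative sketch only (the bilinear operator and stepsize are our choices, not the paper's experiments):

```python
import numpy as np

# Naive gradient play on the bilinear min-max problem min_x max_y x*y,
# i.e. the monotone operator F(x, y) = (y, -x). Each step rotates and
# expands the iterate: ||z_{t+1}||^2 = (1 + eta^2) * ||z_t||^2,
# so the iterates spiral away from the solution (0, 0) instead of converging.
eta = 0.1
z = np.array([1.0, 0.0])
norms = [np.linalg.norm(z)]
for _ in range(100):
    z = z - eta * np.array([z[1], -z[0]])  # z_{t+1} = z_t - eta * F(z_t)
    norms.append(np.linalg.norm(z))
print(norms[0], norms[-1])  # the norm grows monotonically
```

The exact growth factor per step follows from expanding the squared norm of the update, which is why smaller stepsizes slow the divergence but never stop it for this operator.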
We believe surrogate losses have played an important role in modern ML such as in policy gradient methods in reinforcement learning and we are excited to bring this approach to the VI setting. Despite the limitations that we have outlined in the paper we believe our results to be an important contribution to:\\n1. understand why surrogate loss methods work with a small number of gradient steps\\n2. understand why we should or shouldn't expect them to work in the VI setting if at all\", \"in_doing_so_we_have_shown\": \"1. An optimizer agnostic approach to reducing VI problems to scalar minimization problems with convergence guarantees. Our approach does not assume what optimizer is used and is compatible with any deep learning optimizer. We discuss methods like Gauss-Newton to show how existing methods are a special case of our approach.\\n2. Provide a novel surrogate loss approach to learning value functions in RL and demonstrated a **significant improvement over data efficiency** when compared to TD(0), a standard approach to value learning\\n3. We have provided a non-trivial example to show that surrogate losses in VI problems are strictly more difficult than scalar minimization\\n\\n\\n> The paper is challenging to follow, particularly in its transition from problem (1) to the construction of the surrogate model, where additional discussion would be beneficial. \\n\\nWe devote most of Section 2 \\\"Surrogate Loss Background\\\" to discuss how problem (1) is related to different examples like supervised learning and min-max problems. In this section, we also discuss the intuitions behind the surrogate approach (i.e. approximating a gradient step in the prediction space) and why this is beneficial when there is hidden structure. We also connect all of the different components in problem (1) to these applications (e.g. $g$, $\\\\mathcal{Z}$, and $F$).\\n\\n \\n> The assumptions also seem overly restrictive. 
For instance, while assuming convexity of the loss with respect to the model's output is reasonable for most loss functions, the assumption that the constrained domain is convex feels unnecessarily limiting, even though the authors provide a few narrow examples.\\n\\nThank you for this comment, indeed this is a restriction of our analysis as we have outlined in Section 2. However, we would like to highlight a few points regarding this assumption and believe that our results, even with this assumption, are an important step toward extending the surrogate approach to VI problems.\\n\\n+ We linked to two interesting extreme cases when $\\\\mathcal{Z}$ is convex: (1) when a linear model is used, and (2) when the model is large enough to interpolate any dataset. In fact, (2) is not so extreme: large-capacity neural networks have been shown to interpolate even random noise [2]. \\n+ We provide the first extension of surrogate methods to VI problems with convergence to the global solution of the VI. In contrast, previous works that achieved similar results are **only in the scalar minimization case** where **they also assume the set of predictions is convex** [1].\\n+ This assumption in fact **strengthens** our negative result since even when $\\\\mathcal{Z}$ is convex, divergence is possible for $\\\\alpha<1$\\n + This is surprising since $\\\\alpha <1$ is enough for convergence in scalar minimization even without convexity of the loss or the set of predictions! This demonstrates how much more difficult it is to theoretically analyze a surrogate approach in the VI case. \\n\\n> Furthermore, the alpha-descent condition (5) requires closer examination, as it appears to be a stringent requirement. 
Specifically, it requires a single constant alpha that holds uniformly across all t\\n\\nThe alpha condition indeed needs to hold for all $t$, however, we believe it is actually less stringent than existing approaches that require the error $\\\\ell_t(\\\\theta_{t+1})-\\\\ell_t^\\\\ast < \\\\epsilon$ to be bounded by a small constant $\\\\epsilon$ across all $t$ (e.g. [1]). \\nOur $\\\\alpha$ descent condition provides an improvement on multiple fronts:\\n\\n+ The $\\\\alpha$-descent condition is a more accurate model than just bounding the error when a moderate amount of optimization is used at each $t$ (e.g. a small number of gradient steps). In Section 4 we highlight some cases.\\n+ We achieve convergence to the solution $z_\\\\ast$ without assuming the errors $\\\\||z_{t+1}-z_t^\\\\ast\\\\||$ to be summable or going to zero a priori, which is common even in the scalar minimization case (see the related work paragraph in Section 2 lines 146-155 for discussion).\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for you comments and questions, they will improve the paper. Below we have addressed all your raised concerns and look forward to your response.\\n\\n> Certain parts lack clarity. Condition 2 in Theorem 3.2 seems unnecessary and needs refinement. (See question and comments).\\n\\nWe address condition 2 in more detail below.\\n\\n> The paper lacks larger scale experiments. For example Deep RL experiments (with bigger underlying neural networks) could be included to demonstrate the claimed scalability of the method.\\n \\nFor our Mujoco RL experiments we selected a standard network architecture (2 hidden layer MLP) [3,4,5]. Our intention was not to select a small model but to demonstrate our approach in a standard Mujoco setup. However, we have included an experiment with 16 layers in stead of 2 to demonstrate the scalability of our approach, see Figure 10 in the appendix. 
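To make the alpha-descent discussion in this thread concrete, here is a minimal runnable sketch of the surrogate outer/inner loop. It is an illustrative toy of our own construction (the operator F, the linear model g, and all stepsizes are assumptions for demonstration, not the paper's setup). Because the linear model here can realize any prediction, the inner optimum ell_t^* is 0 and the alpha-descent condition can be checked directly, as noted in the responses above:

```python
import numpy as np

# Toy monotone VI operator F(z) = A z with a strong rotational component:
# (A + A^T)/2 = 0.1*I, so the problem is strongly monotone but naive
# gradient play on it cycles unless the stepsize is chosen carefully.
A = np.array([[0.1, 1.0],
              [-1.0, 0.1]])

def F(z):
    return A @ z

def g(theta):
    # Illustrative over-parameterized "model": predictions are linear in
    # theta, so the realizable prediction set is all of R^2 and ell_t^* = 0.
    return theta.reshape(2, 4) @ np.ones(4)

theta = np.random.default_rng(0).normal(size=8)
eta, alpha, eta_inner = 0.1, 0.25, 0.01

for t in range(1000):
    z_t = g(theta)
    target = z_t - eta * F(z_t)  # exact gradient step in prediction space

    def ell(th):  # surrogate loss ell_t(theta) = 0.5*||g(theta) - target||^2
        return 0.5 * np.sum((g(th) - target) ** 2)

    ell_start = ell(theta)
    # Inner loop: any optimizer could be used; here plain GD runs until the
    # alpha-descent condition ell_t(theta) <= alpha * ell_t(theta_t) holds.
    while ell(theta) > alpha * ell_start:
        grad = np.kron(g(theta) - target, np.ones(4))  # chain rule for linear g
        theta = theta - eta_inner * grad

print(np.linalg.norm(g(theta)))  # predictions approach the VI solution z* = 0
```

The inner loop stops after a fixed, small number of gradient steps each outer iteration (the contraction is geometric for this quadratic surrogate), matching the observation that a handful of inner steps suffices in practice.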
Since in RL the bottleneck is mostly due to environment interaction we see comparable runtimes to the 2 layer experiments. Additionally the performance is similar, as expected due two-layers being sufficient.\\n\\nWe would also like to emphasize that the scalability of our approach highly depends on the inner-loop optimizer. Since in practice GD with a small number of steps is sufficient (as demonstrated in our experiments and previous works), the surrogate loss approach is on the same order of complexity as SGD without a surrogate.\\n\\n\\n> You claim that Sakos et al. (2024) assumes bounded implicitly in their main lemma. Can you clarify which lemma, where it is assumed and which results are affected?\\n \\nThe set of predictions $\\\\mathcal{Z}$ or as refered to by Sakos et al. the set of latent variables $\\\\mathcal{X}$ is assumed to be bounded in Lemma 4, their \\\"template inequality\\\". More precisely, in the proof of Lemma 4 the existence of a constant $D = diam(\\\\mathcal{X})$ (i.e. $\\\\mathcal{X}$ is a set with bounded diameter) is used in equation B.20 in Appendix B. Lemma 4 is then used in Theorems 1,2,3,4. \\n\\n> The second condition on in theorem 3.2 seems to be always satisfied by setting $p=1$ and $C=\\\\alpha/\\\\eta$. From the proof it appears that there is an hidden dependency between $\\\\eta$, $C$, $p$, and $\\\\alpha$. The statement should either include such dependency or at least clarify this aspect.\\n\\nThank you for this comment, we will clarify more precisely. What we meant exactly by this condition is that $\\\\alpha$ can be made smaller by making $\\\\eta$ smaller. That is\\n$\\\\exists C, p > 0$ such that $\\\\forall \\\\eta > 0$ it holds $\\\\alpha < C\\\\eta^p$.\\n\\nWhile the first condition is independent of $\\\\eta$ the second one is more subtle. 
The example given above would not satisfy the condition since we require the inequality to hold for all $\\\\eta$ (or more generally for all $\\\\eta$ small enough).\\nThe intuition behind this condition is that the surrogate loss $\\\\ell_t(\\\\theta) = \\\\frac{1}{2}\\\\||g(\\\\theta) - (g(\\\\theta_t)-\\\\eta F(g(\\\\theta_t)))\\\\||^2$ becomes easier as $\\\\eta$ gets smaller, allowing the current predictions $z_t=g(\\\\theta_t)$ to start close to the exact PGD step $z_t^\\\\ast$. We suspect some quasi-Newton methods may be able to take advantage of this structure and thought it to be useful to present this alternative condition on $\\\\alpha$ as one way to quantify this.\\n\\n> The use of the term \\u201cgradient step\\u201d contrasts with the Variational inequality formulation where in general $F$ is not a gradient. A possible alternative could be \"proximal step\".\\n\\nWe agree that this terminology can sometimes be confusing; however, it is common practice to refer to the update rule\\n$$ z_{t+1} = z_t - \\\\eta F(z_t)$$\\nas the \"gradient method\" in the VI context. \\nOne example is the famous extragradient (EG) method by Korpelevich. EG is mostly used in the VI context where there may not be a \"gradient.\" In fact, Korpelevich's seminal paper refers to the update above as the \"gradient method\" [1].\\n\\nWe disagree that the proximal method would be an appropriate description, as it is usually meant to represent the following update rule:\\n$$z_{t+1}= z_t -\\\\eta F(z_{t+1}).$$\\nSee for example [2].\\n\\n> The related work paragraph in the introduction should probably be a subsection or its name should be removed or changed.\\n\\nWe agree that this paragraph should be renamed; it would be more accurate and informative to specifically mention that this paragraph corresponds to **surrogate losses in scalar minimization**.\\n\\n\\n> The PL condition is not properly defined in the main text. 
It could be defined close to Line 345 or at least in the appendix.\\n\\nThanks for this comment, we will add the PL definition in the appendix.\"}", "{\"title\": \"Author Response\", \"comment\": \"We are pleased to hear that you found our paper well-written and our contributions significant. We are hoping that our work opens the door to new methods in solving difficult modern VI problems and at the same time provides new insights to existing approaches.\\n \\nThank you for your comments and feedback. We address your comments and questions below in detail.\\n\\n> The empirical finding of better optimization of the inner problem not leading to better optimization of the outer loop is very interesting but unfortunately not examined in more detail. Both a more in-depth experimental investigation and a theoretical justification for this effect could strongly improve the paper, see also the questions below.\\n\\nThank you for this comment. We indeed found this surprising and interesting. However, we did not see such behaviour in our larger scale experiments, where more inner loop iterations resulted in better performance (with respect to the outer loop). Similarly, for supervised learning [1] also observed that more inner steps improved performance. Therefore, this behaviour observed in our experiments may be due to the small dimensionality of the toy problems in Section 5. \\n\\nSince we believe this behaviour is both instance and optimizer specific (i.e. depending on the problem and the optimizer), we believe it to be out of the scope of this paper. Our current focus and approach is optimizer agnostic -- with any optimizer being used in the inner loop. However, we agree that potential insight might be gained by considering new descent-like conditions (see below).\\n\\n> How does that tie in with the $\\\\alpha$-descent rule? If not a low loss value, what makes a \\\"good\\\" solution to the inner problem that improves convergence of the outer loop? 
\\n\\nSince our approach has been to consider the inner loop as a black-box and we do not use any knowledge of the algorithm inside to derive our convergence guarantees, it is difficult to pinpoint how exactly this phenomenon corresponds to $\\\\alpha$-descent. It is possible that in these examples the methods are related to $\\\\alpha$-descent but with respect to a different method that is faster than the projected gradient method. Alternatively, as you mention, $\\\\alpha$-descent may not be the best way to measure a good solution of the inner loop. We believe however that the $\\\\alpha$-descent condition is a natural choice and an important starting point since it corresponds to the \\\"gap\\\" of the scalar loss in the inner loop that is often studied in convex or non-convex optimization. In addition, we believe that the $\\\\alpha$-descent condition is an important step in analyzing surrogate methods since existing approaches require bounding the errors or gap by a small constant (e.g. [1]) instead of using a relative descent condition like ours. In comparison, we believe our approach is a more accurate model of what is used in practice, e.g. a moderate number of gradient steps without forcing a small loss value in each inner loop (see more details below).\\n\\n> Has this effect also been observed in the scalar minimization case? If yes, how does it compare?\\n\\n[1] did not observe this behaviour in supervised learning. [2] looked at surrogate losses for policy gradient methods in RL and showed that more gradient steps in the inner-loop can hurt performance in some environments. However, we do not know if the surrogate loss value was smaller when more steps were used since this statistic was not reported. Furthermore, the surrogate loss is stochastic and is biased, an issue not present in our small scale deterministic setting. 
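For concreteness, the deterministic surrogate loop under discussion can be sketched as follows; this is purely illustrative (the linear model $g(\theta)=W\theta$, the toy strongly monotone operator $F$, and all step sizes are hypothetical choices, not the paper's experimental setup):

```python
import numpy as np

mu = 0.5  # strong monotonicity constant of the toy operator below

def F(z):
    # Operator of the toy min-max game f(x, y) = (mu/2)x^2 + xy - (mu/2)y^2,
    # i.e. F = (df/dx, -df/dy); its unique solution is z_* = 0.
    x, y = z
    return np.array([mu * x + y, -x + mu * y])

# Hypothetical linear model g(theta) = W @ theta standing in for the predictions.
W = np.array([[1.0, 0.3],
              [-0.2, 0.9]])
theta = np.array([1.0, -1.0])

eta = 0.2          # outer step size
inner_steps = 50   # fixed number of inner gradient steps
inner_lr = 0.3

z = W @ theta
for t in range(300):
    target = z - eta * F(z)                # exact outer gradient target z_t^*
    for _ in range(inner_steps):           # inner loop: GD on the surrogate loss
        grad = W.T @ (W @ theta - target)  # gradient of 0.5*||g(theta) - target||^2
        theta = theta - inner_lr * grad
    z = W @ theta

print(np.linalg.norm(z))  # tiny: the iterates z_t converge to z_* = 0
```

In this deterministic setting the inner surrogate loss can be driven arbitrarily low by taking more inner steps, which is exactly the knob (the number of inner iterations) that the discussion here is about.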
Therefore we don't know whether (1) more steps actually minimize the loss better, as in our case, or (2) this is an issue due to bias. For more details about this bias see [1] or our Section 5.2.2.\\n\\n> The authors write: \\\"In general $\\\\ell_t^*$ may not be zero and so this condition cannot be verified directly, however, this condition can often be met via first-order methods for a fixed number of steps or can be approximated with $\\\\ell_t^*=0$.\\\" (line 171) I am wondering how practical the $\\\\alpha$-descent condition is, is it possible to verify this condition in the presented experiments in section 5?\\n\\nIn general without knowing $\\\\ell_t^\\\\ast$ it is not possible to verify the $\\\\alpha$-descent condition directly. However, in some cases a fixed number of GD steps is sufficient to guarantee $\\\\alpha$-descent. In Section 4 we provide some discussion on when a fixed number of inner loop gradient descent steps can be used to satisfy any $\\\\alpha$-descent condition. In practice, surrogate methods are used with gradient descent and tuned with a fixed *small* number of inner loop steps (e.g. [1,2]). Currently, there are no other works or analysis even in the scalar minimization case that explain why a small number of gradient descent steps should suffice.\\nWe believe the $\\\\alpha$-descent condition in this regard to be very practical as it does not require full optimization of the inner loop like other approaches (e.g. [1]).\"}", "{\"title\": \"Author Response\", \"comment\": \"We are happy to hear that you found our paper interesting, well-written and our contribution both original and nontrivial. We also appreciate the detailed questions and potential directions for future work. We are hoping to set a foundation on which others can build better algorithms for difficult VI problems, and are excited to see many directions that we have not considered.\\n\\n\\n> I think it's very interesting that sometimes more inner loop iterations damage the performance as a whole! 
(I once did some empirics for boosting weak learners in RL, where I saw a similar thing that I attributed to overfitting...). Do you have any explanations/intuitions for why this could happen (ie is it a general phenomenon in surrogate learning, or an artifact of these small-scale game experiments)?\\n\\nThank you for this comment, reviewer LPnR also highlighted this observation. We also found it surprising and believe it to be important to investigate further as future work. As we mentioned to reviewer LPnR, we believe this behaviour to highly depend on the problem and it might be an artifact of our small dimensional toy problems. Given our current black box treatment of the inner loop it is difficult to understand theoretically from our analysis why such a phenomenon would occur. As mentioned by reviewer LPnR, it is possible a different descent condition could explain this behaviour. On the other hand, it is also possible that the performance is explainable via the $\\\\alpha$-descent condition but with respect to a different method that is faster than the projected gradient method. \\n\\n>do you suspect that there can be any theoretical insight/proofs on synthetic settings for how surrogate losses get a provable advantage in terms of number of diverse samples? Certainly one can make claims about decreasing the # of outer loop iterations, but I would be very interested if the extra regularity of your simple surrogate loss trajectory (ie the GD trajectory on $z_t$) can manifest as more efficient exploration?\\n\\nIn our experiments we focused on value prediction, i.e. learning a value function, where the behavioural policy used to collect the trajectories is fixed. In this case the methods used would not change the trajectories. 
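To make the fixed-behavioural-policy value-prediction setting concrete, here is a minimal expected (tabular) TD(0) sketch; the 3-state chain, rewards, discount, and step size below are hypothetical illustrations, not taken from our experiments:

```python
import numpy as np

# Expected (deterministic) TD(0) for a fixed behavioural policy on a small
# 3-state Markov chain; P and R are assumed known here purely for illustration.
gamma = 0.9
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.0, 0.8]])
R = np.array([1.0, 0.0, 2.0])

# Fixed point of the Bellman operator T V = R + gamma * P V.
V_true = np.linalg.solve(np.eye(3) - gamma * P, R)

V = np.zeros(3)
for _ in range(2000):
    V = V + 0.1 * (R + gamma * P @ V - V)  # expected TD(0) step

print(np.max(np.abs(V - V_true)))  # ~0: converges to the true values
```

Since the behavioural policy (and hence $P$) is fixed, nothing in the update changes which trajectories would be collected; only the value estimates move.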
There is some mention in [2] on how surrogate losses can affect exploration in policy gradient methods; however, in general this seems to be an open question.\\n\\n> would it be possible for you to do GAN training?\\n\\nWe think GANs are an interesting direction for future work. However, from our understanding, most GAN losses do not admit a tractable hidden monotone structure (i.e. a hidden convex-concave game). Let us provide more details. Given a discriminator $D_w$ and a generator $G_\\\\theta$, the standard GAN minimax loss is:\\n$$\\\\min_\\\\theta \\\\max_w E_{x\\\\sim p_{data}(x)}\\\\log D_w(x) + E_{z\\\\sim p_z(z)}\\\\log (1- D_w(G_\\\\theta(z)))$$\\nwhich is concave with respect to the discriminator $D$ but **not convex** with respect to the generator function $G$; however, it can be seen as a convex-concave minimax loss with respect to the generated distribution $p_G$:\\n$$ \\\\min_\\\\theta \\\\max_w E_{x\\\\sim p_{data}(x)}\\\\log D_w(x) +E_{x'\\\\sim p_{G_\\\\theta}(x')}\\\\log (1- D_w(x'))\\n$$\\nHowever, we cannot compute $\\\\nabla_\\\\theta E_{x'\\\\sim p_{G_\\\\theta}(x')}\\\\log (1- D_w(x'))$ easily with the standard REINFORCE trick because it requires computing $\\\\nabla_\\\\theta \\\\log p_{G_\\\\theta}(x')$, which is not tractable for most GANs as they do not have an explicit density function (which is considered to be one of their important features: https://arxiv.org/pdf/1701.00160). Thus, developing GANs that allow for a tractable surrogate-based optimization leveraging their hidden convex-concave formulation is outside of the scope of this paper. Instead we decided to focus on RL applications. \\n\\nMeanwhile, minimizing projected Bellman error was shown by Bertsekas [1] to be a hidden smooth and strongly monotone problem, the exact setting we study in our paper. Additionally, given the constraints of a conference paper, we believe it was important to share Bertsekas' VI perspective of projected Bellman error with the deep RL community and show:\\n1. 
that it naturally allows for a surrogate approach similar to standard policy gradient methods like PPO [3]\\n2. draw connections to our perspective by showing it as a special case of the PHGD method and more generally using Gauss-Newton as an inner-loop optimizer. Note that the surrogate interpretation of this method as given by equation (13) is novel.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to thank you again for your detailed comments and feedback. We believe our recent edits will improve the clarity of the paper and our contribution overall.\\n\\n## Conditions in Theorem 3.2\\n\\n> Theorem 3.2 mentions at the start \\\"there exist a stepsize $\\\\eta$\\\" so the condition holding for all $\\\\eta$ is misleading. Moreover, the proposed condition seems never satisfied: for every $\\\\alpha, C, p > 0$ there exist $\\\\eta >0 $ such that $\\\\alpha > C\\\\eta^p $ , for example if we take $\\\\eta = (\\\\alpha/(2C))^{1/p}$, then $C\\\\eta^p = \\\\alpha/2 <\\\\alpha$.\\n> \\n> From looking at the proof it seems that $\\\\eta$ and $\\\\alpha$ are linked and should satisfy some conditions depending on the problem, why do you not directly state those conditions in the theorem statement? \\n\\nYou are absolutely correct when you wrote that $\\\\eta$ and $\\\\alpha$ were linked by the condition \\n$$\\n1-2\\\\eta(\\\\mu-\\\\alpha L) + (1+\\\\alpha^2)\\\\eta^2 L^2 <1 \\n$$ we will state that in the theorem instead. The goal of condition 1) (or 2)) was to give a sufficient condition under which the inequality above holds. We agree with the reviewer's suggestions regarding how to improve the presentation of Theorem 3.2, which was updated as follows:\\n\\n**Theorem 3.2**: Let Assumption 3.1 hold and let $z_t = g(\\\\theta_t)$ be the iterates produced by Algorithm 1. 
If $\\\\alpha$ and $\\\\eta$ are picked such that\\n$$\\n\\\\rho:= 1-2\\\\eta(\\\\mu-\\\\alpha L) + (1+\\\\alpha^2)\\\\eta^2 L^2 <1 \\n$$ then $z_t$ converges linearly to the solution $z_\\\\ast$ at a rate $O(\\\\rho^t)$. In particular, if $\\\\alpha = \\\\frac{\\\\mu}{2L}$ and $\\\\eta = \\\\frac{2\\\\mu}{5L^2}$ we have $\\\\rho \\\\leq 1 - \\\\frac{\\\\mu^2}{5L^2}$.\\n\\n\\n## Bounded assumption of Sakos et al\\n\\n> Could you include this statement in the manuscript? I think this is subtle, while it should be clear to readers.\\n\\nWe have added the mentioned statement as a footnote on page 7.\\n\\n## Deep RL Experiments\\n\\n> Thank you for providing additional experiments. However, as you also mention, this setup looks very artificial and 2 layers are already enough. I would have preferred more challenging problems really taking advantage of the larger networks, sorry for not having specified this in the review. This part remains still a minor weakness of the work.\\n\\nThank you for this clarification. We would like to emphasize that our main contribution lies in providing a theoretical framework for analyzing convergence in variational inequality problems with hidden structure. Based on this framework, we proposed a novel method for value prediction tasks in deep RL.\\n\\nWe do not believe the current setup to be artificial; Mujoco is a well-established benchmark for deep RL tasks, used in RL papers published at top-tier conferences including ICLR [1,2,3]. Additionally, our results on these tasks demonstrate significant performance improvements, particularly in terms of data efficiency and runtime, which we believe are valuable to the RL community.\\n\\nWe appreciate your concern on scalability, however. We believe with our 16 layer experiment we have shown our approach to be scalable to much larger networks (as we have emphasized in the paper). 
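As a quick numeric sanity check of the particular constants in the restated Theorem 3.2 above (a sketch that only uses the quoted formulas; the sampled values of $\mu \leq L$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# With alpha = mu/(2L) and eta = 2mu/(5L^2), verify that
# rho = 1 - 2*eta*(mu - alpha*L) + (1 + alpha^2) * eta^2 * L^2
# satisfies the claimed bound rho <= 1 - mu^2/(5L^2) whenever 0 < mu <= L.
for _ in range(1000):
    L = rng.uniform(0.1, 10.0)
    mu = rng.uniform(0.01, 1.0) * L   # strong monotonicity constant, mu <= L
    alpha = mu / (2 * L)
    eta = 2 * mu / (5 * L**2)
    rho = 1 - 2 * eta * (mu - alpha * L) + (1 + alpha**2) * eta**2 * L**2
    assert rho <= 1 - mu**2 / (5 * L**2) + 1e-12

print("bound verified on all sampled (mu, L) pairs")
```

Algebraically the check reduces to $\rho = 1 - \tfrac{6}{25}s + \tfrac{1}{25}s^2$ with $s = \mu^2/L^2 \leq 1$, so the stated bound $1 - \tfrac{1}{5}s$ holds, with equality exactly at $\mu = L$.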
We acknowledge that it will be interesting to extend our methods to more challenging tasks that require larger networks (e.g. estimation of the Q function of a chess engine [4]) but we believe it is not reasonably achievable within the timeframe of the discussion period and is outside of the scope of this paper. However, we plan to work on such ideas and applications as a follow-up.\\n\\n\\n[1] Manan Tomar, Lior Shani, Yonathan Efroni, and Mohammad Ghavamzadeh. Mirror descent policy optimization. In International Conference on Learning Representations, 2022.\\n\\n[2] Vaswani, S., Bachem, O., Totaro, S., M\\u00fcller, R., Garg, S., Geist, M., ... & Le Roux, N. (2022, May). A general class of surrogate functions for stable and efficient reinforcement learning. In International Conference on Artificial Intelligence and Statistics (pp. 8619-8649). PMLR.\\n\\n[3] Fujimoto, S., Hoof, H., & Meger, D. (2018, July). Addressing function approximation error in actor-critic methods. In International conference on machine learning (pp. 1587-1596). PMLR.\\n\\n[4] Farebrother, Jesse, et al. \\\"Stop regressing: Training value functions via classification for scalable deep rl.\\\" ICML 2024.\"}", "{\"summary\": \"This paper introduces a new method for optimizing variational inequalities with hidden structure by optimizing a series of surrogate losses, thereby extending previous methods for optimizing scalar loss functions. The authors provide a new $\\\\alpha$-descent condition on the sequence of inner surrogate optimization problems which is used to derive linear convergence rates for the outer optimization in the deterministic and unconstrained stochastic setting. Specific choices of optimizer for the inner surrogate loss are shown to generalize previous works. Additionally, the authors provide conditions under which linear convergence is achieved.\\n\\nExperimentally, the method is tested on optimizing min-max games and projected Bellman error. 
In the min-max setting different variants of the method are compared, showing that the choice of the inner optimizer matters by improving on the special cases treated in previous work. In the RL setting, the surrogate loss perspective is connected to computationally expensive preconditioning methods which are shown to be approximated in the linear case via the presented iterative scheme. In the non-linear case the policy evaluation problem is tackled for two mujoco environments, where different versions of the method are shown to improve over the special case of TD(0) in terms of wall-clock time and sample efficiency.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The extension of iterative surrogate optimization from the scalar to the variational inequality case is a significant contribution.\", \"Paper offers a rigorous theoretical analysis.\", \"Strong performance of the method compared to the TD0 baseline.\", \"Well-written and relatively easy to follow.\"], \"weaknesses\": [\"The empirical finding of better optimization of the inner problem not leading to better optimization of the outer loop is very interesting but unfortunately not examined in more detail. Both a more in-depth experimental investigation and a theoretical justification for this effect could strongly improve the paper, see also the questions below.\"], \"questions\": [\"It is very interesting that better convergence of the inner loop does not necessarily translate to better convergence of the outer loop, i.e. more iterations are not necessarily useful (e.g. LM, Sur-GD, GN, Fig 1 & 2). Is there a theoretical justification? How does that tie in with the $\\\\alpha$-descent rule? If not a low loss value, what makes a \\\"good\\\" solution to the inner problem that improves convergence of the outer loop? Has this effect also been observed in the scalar minimization case? 
If yes, how does it compare?\", \"The authors write: \\\"In general $\\\\ell_t^*$ may not be zero and so this condition cannot be verified directly, however, this condition can often be met via first-order methods for a fixed number of steps or can be approximated with $\\\\ell_t^*=0$.\\\" (line 171) I am wondering how practical the $\\\\alpha$-descent condition is, is it possible to verify this condition in the presented experiments in section 5?\"], \"style\": [\"Fig 1: It is hard to tell the methods apart, maybe use different line styles.\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the response. I am still dubious about some parts of the submission.\\n\\n>What we meant exactly by this condition is that $\\\\alpha$ can be made smaller by making $\\\\eta$ smaller. That is $\\\\exists C, p > 0$ such that $\\\\forall \\\\eta > 0$ it holds $\\\\alpha < C\\\\eta^p$.\\n\\nTheorem 3.2 mentions at the start \\\"there exist a stepsize $\\\\eta$\\\" so the condition holding for all $\\\\eta$ is misleading.\\nMoreover, the proposed condition seems never satisfied: for every $\\\\alpha, C, p > 0$ there exist $\\\\eta > 0$ such that $\\\\alpha > C \\\\eta^p$, for example if we take $\\\\eta = (\\\\alpha/(2C))^{1/p}$, then $C \\\\eta^p = \\\\alpha/2 < \\\\alpha$.\\n\\nFrom looking at the proof it seems that $\\\\eta$ and $\\\\alpha$ are linked and should satisfy some conditions depending on the problem, why do you not directly state those conditions in the theorem statement? 
Alternatively, I suggest you remove the second condition, also because Lines 830-836 in the proofs are not clear and it seems that a step is missing: verifying that $\\\\eta$ is not so small as to violate the inequality $\\\\alpha < C\\\\eta^p$ assumed at the start of the derivation.\\n\\n>The set of predictions $\\\\mathcal{Z}$ or, as referred to by Sakos et al., the set of latent variables $\\\\mathcal{X}$ is assumed to be bounded in Lemma 4, their \\\"template inequality\\\". More precisely, in the proof of Lemma 4 the existence of a constant...\\n\\nCould you include this statement in the manuscript? I think this is subtle, while it should be clear to readers.\\n\\n>For our Mujoco RL experiments we selected a standard network architecture (2 hidden layer MLP) [3,4,5]. Our intention was not to select a small model but to demonstrate our approach in a standard Mujoco setup. However, we have included an experiment with 16 layers instead of 2 to demonstrate the scalability of our approach, see Figure 10 in the appendix. Since in RL the bottleneck is mostly due to environment interaction we see comparable runtimes to the 2 layer experiments. Additionally the performance is similar, as expected due to two layers being sufficient.\\n\\nThank you for providing additional experiments. However, as you also mention, this setup looks very artificial and 2 layers are already enough. I would have preferred more challenging problems really taking advantage of the larger networks, sorry for not having specified this in the review. This part remains still a minor weakness of the work.\"}", "{\"title\": \"Author Response to All Reviewers\", \"comment\": [\"We would like to thank all reviewers for their constructive feedback and comments that will improve the paper. 
Below we list changes we have made to the paper since the original submission.\", \"Improved readability of Figures 1 and 2 with larger text and lines\", \"Figure 3 changed to be averaged over many trajectories including error bars\", \"This allows for a more robust comparison of the different methods as opposed to using one trajectory like in [1]\", \"Added PL definition to the appendix\", \"Added deep RL experiment with a **16 layer** MLP in Mujoco to demonstrate the scalability of our approach. See Figure 10.\"], \"references\": \"[1] Dimitri P Bertsekas. Projected equations, variational inequalities, and temporal difference methods. Lab. for Information and Decision Systems Report LIDS-P-2808, MIT, 2009.\"}", "{\"title\": \"Author Response Continued\", \"comment\": \"> Fig 1: It is hard to tell the methods apart, maybe use different line styles.\\n\\nWe agree that Figure 1 can be a bit difficult to read. We have updated the figure with larger text and believe it to be more legible. Unfortunately, due to the many methods on the leftmost plot we find that different line styles are difficult to pick out and believe the new figure to be easier to read.\\n\\nReferences\\n\\n[1] Lavington, J.W., Vaswani, S., Babanezhad Harikandeh, R., Schmidt, M. & Le Roux, N. (2023). Target-based Surrogates for Stochastic Optimization. <i>Proceedings of the 40th International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i> 202:18614-18651. Available from https://proceedings.mlr.press/v202/lavington23a.html.\\n\\n[2] Sharan Vaswani, Olivier Bachem, Simone Totaro, Robert M\\u00fcller, Shivam Garg, Matthieu Geist, Marlos C. Machado, Pablo Samuel Castro, Nicolas Le Roux. A general class of surrogate functions for stable and efficient reinforcement learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8619-8649, 2022.\"}", "{\"title\": \"Author Response Continued\", \"comment\": \"References:\\n\\n[1] Korpelevich, G. 
M.: The extragradient method for finding saddle points and other problems, Matecon 12 (1976), 747\\u2013756.\\n\\n[2] Mikhail V Solodov and Benar F Svaiter. A hybrid approximate extragradient\\u2013proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Analysis, 7(4):323\\u2013345, 1999.\\n\\n[3] Haarnoja T, Zhou A, Abbeel P, Levine S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning 2018 Jul 3 (pp. 1861-1870). PMLR.\\n\\n[4] Achiam J, Knight E, Abbeel P. Towards characterizing divergence in deep q-learning. arXiv preprint arXiv:1903.08894. 2019 Mar 21.\\n\\n[5] Achiam J. Benchmarking reinforcement learning algorithms. Spinning Up in Deep RL. OpenAI; Available from: https://spinningup.openai.com/en/latest/spinningup/bench.html. Accessed 2024 Nov 18.\"}", "{\"summary\": \"The paper proposes an algorithm using some type of surrogate approximation to solve variational inequality (VI) problems. Those problems can be seen as finding first-order stationary points of a constrained optimization problem, but probably the setting in this paper is more general than that since the vector-valued function F is not necessarily a gradient (e.g. max-min game). The main idea is that \"composite\" (between the model and the loss) optimization problems in machine learning normally exhibit some structure, e.g., the loss w.r.t. the model's output is convex (but the whole optimization function is not convex w.r.t. 
the model's parameters), one can push the \"difficult part\" relating to the model into the constraint to make the objective function convex. The authors then design a sequence of surrogate functions to minimize this reformulation problem and show convergence under a condition called \"alpha-descent\". To minimize the surrogate functions, they employ classical methods like Gauss-Newton or Levenberg-Marquardt. Numerical experiments are performed for some toy min-max games and in the reinforcement learning context.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"The considered problem is important in the classical optimization context (i.e., constrained optimization, complementarity) and modern ML where the loss is structured. The problem is also more general than minimizing a scalar loss usually showing up in supervised learning. The experiments show that the proposed method works fine in practice.\", \"weaknesses\": \"The paper is challenging to follow, particularly in its transition from problem (1) to the construction of the surrogate model, where additional discussion would be beneficial. The assumptions also seem overly restrictive. For instance, while assuming convexity of the loss with respect to the model's output is reasonable for most loss functions, the assumption that the constrained domain is convex feels unnecessarily limiting, even though the authors provide a few narrow examples. Furthermore, the alpha-descent condition (5) requires closer examination, as it appears to be a stringent requirement. Specifically, it requires a single constant alpha that holds uniformly across all t.\", \"questions\": \"(1) In the basic case of supervised learning with a scalar loss, can we expect the proposed method to perform better than off-the-shelf optimizers that work directly in the parameter space, i.e., Adam?\\n\\n(2) The condition in the while loop of Algorithm 1 cannot be verified. 
How could we let alpha be a user-defined parameter?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response Continued\", \"comment\": \"> is there any understanding of how this framework assists/counteracts the implicit biases of either (1) optimization methods in the inner loop or (2) architectural choices in the model $g$?\\n\\nThank you for this question, we think it would be exciting future work to better understand the interplay between model choices and inner-loop optimizers. In our paper we outline two cases where the choice of $g$ gives us extra insight. In the linear case, we know that one step of Gauss-Newton in the inner-loop (i.e. PHGD) is sufficient since $\\\\alpha = 0$ (as noted by [1]). If the model $g$ is sufficiently expressive that $\\\\mathcal{Z}=\\\\mathbb{R}^n$ then $\\\\ell_t^\\\\ast = 0$ and $\\\\alpha$ can be easily monitored. It would be interesting to investigate as future work how certain architecture/inner-loop optimizer pairs interact for specific cases such as in the paper you shared.\", \"references\": \"[1] Dimitri P Bertsekas. Projected equations, variational inequalities, and temporal difference methods. Lab. for Information and Decision Systems Report LIDS-P-2808, MIT, 2009.\\n\\n[2] Sharan Vaswani, Olivier Bachem, Simone Totaro, Robert M\\u00fcller, Shivam Garg, Matthieu Geist, Marlos C. Machado, Pablo Samuel Castro, Nicolas Le Roux. A general class of surrogate functions for stable and efficient reinforcement learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8619-8649, 2022.\\n\\n[3] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. 
arXiv preprint arXiv:1707.06347, 2017.\"}", "{\"summary\": \"In this paper, the authors introduce a surrogate-loss-based framework for optimization of variational inequality (VI) systems, such as min-max.\\n\\nThis paper is the first to extend the surrogate methodology to (strongly-monotone) VIs, where the inner loop calls a scalar optimization routine black-box. Importantly, they demonstrate with an elementary adversarial example that surrogate methods in these VIs are qualitatively more complex than surrogate methods in traditional scalar optimization (the inner loop progress needs to be strong enough to counteract the effects of $F$ to ensure outer convergence, as opposed to just $<1$ in scalar case). They show that (under sufficient inner loop optimization assumptions, quantified in the form of the $\\\\alpha$-descent condition) the overall VI optimization succeeds in deterministic and stochastic regimes, with rates matching what is to be expected of strongly-convex optimization (ie geometric convergence). Lastly, they observe that an existing VI optimization method (PHGD) can be seen as a particular choice of inner loop optimization routine, and they investigate the benefits and consequences of alternative choices of inner loop algorithm and number of steps. \\n\\nExperimentally, the authors test the surrogate method for VI optimization in previously-investigated small-scale games, as well as on value prediction in fairly substantial RL settings. They observe interesting consequences of certain choices of inner loop algorithm and number of steps, and they demonstrate the value of the surrogate framework. 
In the RL experiments they see a significant improvement in environment sample complexity due to multiple inner loop iterations -- this matches some related work, but importantly does not require complex learned surrogate reward signals!\\n\\nTo summarize, the paper introduces the surrogate framework (outer loop over outputs and inner loop over model parameters) to a class of VI problems, demonstrates the nontrivial result that scalar optimization can (under some assumptions) be used black-box as a subroutine, and empirically investigates the associated phenomena on tasks of increasing complexity. Very cool!\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"This paper has strong originality in that it appears to be the first to extend this type of surrogate losses to VIs, and it does so in a way that provably takes advantage of hidden monotonicity/convexity structure. Importantly, this extension is nontrivial -- there is a simple and solid adversarial example the authors provide (Prop. 3.3) that shows a difficulty gap in comparison with surrogate losses in the scalar optimization case. I think there is the extra strength (though perhaps it isn't highlighted enough, see my \"Weaknesses\" section) that the framework, broad proof techniques, and main takeaways seem rather robust to choice of inner optimization routine (i.e. robust to assumptions, rates of convergence, etc.). 
In my eyes, the authors have constructed a fairly general methodology to reduce VIs with hidden structure to scalar optimization in a way that directly leverages that hidden structure -- this is very cool, and in general such reduction results tend to be quite powerful (such as online-to-batch reduction, nonconvex-to-convex reduction such as Algorithm 3 in https://arxiv.org/pdf/1806.02958, etc).\\n\\nThe analysis is also quite clear, proceeding along similar lines to many optimization papers and doing a fantastic job of contextualizing results, definitions, and assumptions in prior works. In particular, Section 4 (before subsection 4.1) skillfully highlights the flexibility of choices of inner loop optimizers in an organized way, noting equivalence to prior methods (such as PHGD) where applicable. This is a good transition to the experiments, which compare different setups of the inner loop routine in various minimax and RL value-prediction environments. Overall, I think this presentation is clear, the assumptions are obvious, and the experimental apparatus seems right for the general framework (though it would have been cool to see a slightly larger-scale experiment, and I think GAN training is the perfect low-hanging fruit if the authors have the resources!). Lovely stuff :)\", \"weaknesses\": \"I think the main (and only, to be honest) weakness of this paper is a weakness of presentation -- in particular, I feel that (as outlined in the \\\"Strengths\\\" section of my review above), the main contribution of this paper is that it clarifies and formalizes a framework where black-box scalar non convex optimization guarantees can be bootstrapped up to hidden-structure VI guarantees. 
However, at many points I felt that the presentation did not highlight this strongly enough, and instead chose to focus on particular rates/modes of convergence and substantiating particular assumptions.\\n\\nTo be specific, I would argue that the $\\\\alpha$-descent condition for inner loop progress is a bit of a red herring. As you mention, such convergence can only be shown in specific situations under the usual non convex optimization assumptions (PL condition, for example), which can often be difficult to justify. However, I feel that it's even unnecessary for you to justify it! It seems to me that, for example, Lemma A.2 and Prop A.3 would go through (perhaps with significantly more work) for weaker/more exotic types of inner loop guarantee -- the vibe of the proofs is moreso that (strongly-monotone) VIs allow you to push through the classic optimization proofs (the [Sakos '24] paper offers a similar takeaway in terms of bootstrapping convex VI guarantees up to hidden-convex VI guarantees, see their statements on p. 11). I bet there is a way to turn this into a strength of your paper: maybe something more like \\\"we prove things under the $\\\\alpha$-descent condition for clarity, but our meta-point is a more general reduction from VI optimization to scalar optimization via surrogate losses\\\". I am not recommending you to do the analysis under all kinds of crazy inner loop guarantees, but instead to reweight the presentation a bit to highlight the robustness to inner loop method.\\n\\nI will say that the $\\\\alpha$-descent setting is a fantastic choice of inner loop guarantee to demonstrate the difficulty gap between scalar vs VI surrogate methods; it makes the presentation of the adversarial example very clear. However, I would have liked to see it used more as a presentation tool/particular instantiation of a more general phenomenon, whereas it often felt like you were keeping it around as a core part of your results. 
If the impact of this condition was qualitatively different in the VI setting than in scalar surrogate optimization then that would be one thing, but I am unsure of this (note: I am not too familiar with this style of surrogate losses via hidden convexity/monotonicity -- if I am wrong about this perhaps a toy example exemplifying the difference would be cool!). \\n\\nTo really hammer this point home (sorry, but I don't really see any other weaknesses to write about), I feel like over-indexing on this particular criterion forces you to get stuck in the muck of justifying assumptions such as spectrally bounding input-output Jacobians of neural networks -- to some this may be a losing battle (models on complex, high-dim data will be likely to disregard features dynamically and hopefully learn lower-rank representations), but one I don't think you need to be fighting! Certain hypothesis classes/optimizers/datasets/etc will have different choices of inner loop routine that make sense, and the beauty of what you've shown here is that surrogate methods in VIs appear flexible to these choices. The language of reductions feels much more natural for such a result: I give you a VI problem with hidden monotone structure, you reduce it to a sequence of scalar non convex problems, and the practitioner/domain expert figures out the right inner loop soup (choosing between first-order or second-order methods, bias-variance tradeoffs, etc) to make it work. \\n\\nTo sum up, it is my opinion that if you are going to use non convex optimization as a black-box for the inner loop, treat it like a black-box (not just in terms of whether to use ADAM as the optimizer, but even in a broader sense). Aside from this (and some questions that I put in the \\\"Questions\\\" section), I have nothing else to say but awesome paper!\", \"questions\": \"1. I think it's very interesting that sometimes more inner loop iterations damage the performance as a whole! 
(I once did some empirics for boosting weak learners in RL, where I saw a similar thing that I attributed to overfitting...). Do you have any explanations/intuitions for why this could happen (ie is it a general phenomenon in surrogate learning, or an artifact of these small-scale game experiments)?\\n\\n2. The results on improved sample complexity in the RL value prediction tasks are very compelling in my opinion! I think it fits neatly into an overarching philosophy that blind RL is wasteful (in terms of samples/environment interactions), and that some form of guidance really helps. There are whole fields (search goal-conditioned RL, contrastive RL, etc) that attempt to figure out how to learn the right flavor of guidance, and it seems to me that your form of (not-learned!) surrogate losses can be seen as a particularly simple form of this. From your work and the related literature (which you know 1000x better than I), do you suspect that there can be any theoretical insight/proofs on synthetic settings for how surrogate losses get a provable advantage in terms of number of diverse samples? Certainly one can make claims about decreasing the # of outer loop iterations, but I would be very interested if the extra regularity of your simple surrogate loss trajectory (ie the GD trajectory on $z_t$) can manifest as more efficient exploration?\\n\\n3. Probably not a valuable question, but I have to ask: would it be possible for you to do GAN training? It would be convincing to deep learning practitioners and theorists alike!\\n\\n4. This is more a question of personal interest regarding this style of surrogate losses (i.e. 
ones where you take a step in output space, use inner loop to make the model catch up, repeat) and perhaps not specific to VIs, but here goes: *is there any understanding of how this framework assists/counteracts the implicit biases of either (1) optimization methods in the inner loop or (2) architectural choices in the model $g$?* I ask because, particularly in deep learning theory, there is often the vibe that the natural dynamics induced by such design parameters actually can help (see this paper https://arxiv.org/pdf/1802.06509 for a cool result on GD acceleration caused by model depth, for example). I could imagine some settings where the rigidity of the outer loop dynamics on $z_t$ prevent these complicated phenomena (for example, in the linked paper I suspect surrogate losses could prevent the acceleration for adversarial choices of $F$). Conversely, I can certainly imagine settings where the structure of the outer loop drives the optimization more usefully, in a similar fashion to how surrogate rewards in RL help orient an agent in a sparse-reward environment (see Question 2). Is there any understanding of this tradeoff, and perhaps more importantly do you imagine any differences to this tradeoff in the VI setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4YzVF9isgD
HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere
[ "Hatef Otroshi Shahreza", "Sébastien Marcel" ]
Face recognition datasets are often collected by crawling the Internet without individuals' consent, raising ethical and privacy concerns. Generating synthetic datasets for training face recognition models has emerged as a promising alternative. However, the generation of synthetic datasets remains challenging as it entails adequate inter-class and intra-class variations. While advances in generative models have made it easier to increase intra-class variations in face datasets (such as pose, illumination, etc.), generating sufficient inter-class variation is still a difficult task. In this paper, we formulate the dataset generation as a packing problem on the embedding space (represented on a hypersphere) of a face recognition model and propose a new synthetic dataset generation approach, called HyperFace. We formalize our packing problem as an optimization problem and solve it with a gradient descent-based approach. Then, we use a conditional face generator model to synthesize face images from the optimized embeddings. We use our generated datasets to train face recognition models and evaluate the trained models on several real benchmark datasets. Our experimental results show that models trained with HyperFace achieve state-of-the-art performance in training face recognition using synthetic datasets. Project page: https://www.idiap.ch/paper/hyperface
[ "Face Recognition", "Hypersphere Optimization", "Privacy", "Synthetic Data" ]
Accept (Poster)
https://openreview.net/pdf?id=4YzVF9isgD
https://openreview.net/forum?id=4YzVF9isgD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yc32AriT4m", "wGudiOHYkG", "qkSSfbqeSO", "oYSW2hwWZx", "gsrnLfyOjU", "gSkFC2rkKV", "eHEDQqbMjB", "dnC9ORv02b", "c27Hq98Cb9", "ZNrvKWH0JH", "XFdeyN6JQs", "TIAuEJVoi3", "QM7aRqUJER", "Py6eEyRxrI", "OnZMDCXuMb", "Nd9JUyukjj", "M2cztPqouA", "LneYLeeTru", "K5IW3yVtmV", "IsOIyLwWhj", "HFY9pLrEz1", "GmY9TcCU5h", "EdP5vWsi4x", "D0JkhImCQ3", "94H1uNDgFT", "3nBRikhyii" ], "note_type": [ "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732629613012, 1730636981488, 1730301358138, 1734443123537, 1732183899504, 1732629012955, 1732612069042, 1732435625991, 1732183968595, 1732183990944, 1732183289328, 1732286212178, 1732629386666, 1732599627293, 1737524286135, 1732629456441, 1732183662560, 1732183871261, 1732183059069, 1732182884994, 1732628705663, 1730726925527, 1732183637419, 1730042826918, 1732555277792, 1732629488350 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_UATk" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_YfZL" ], [ "ICLR.cc/2025/Conference/Submission13859/Area_Chair_fUpi" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_YfZL" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13859/Reviewer_RHfp" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_khYx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_khYx" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_RHfp" ], [ "ICLR.cc/2025/Conference/Submission13859/Reviewer_RHfp" ], [ "ICLR.cc/2025/Conference/Submission13859/Authors" ] ], "structured_content_str": [ "{\"title\": \"Authors Reply to Reviewer Feedback\", \"comment\": \"We sincerely thank the reviewer's feedback. We appreciate the reviewer for their time in reviewing our paper and rebuttal as well as their comments which helped us improve the quality of our paper. We are very happy that our additional experiments and analyses could *satisfactorily answer all of [the reviewer's] questions and concerns*. We appreciate it if the reviewer can kindly consider raising their rating if appropriate.\"}", "{\"summary\": \"Interesting paper that proposes some embedding optimisation in the latent space of a pretrained face recognition model. The optimised embeddings are used for generating facial images using a recently proposed generative model, and then the generated images for training a face recognition model. I liked the novelty and simplicity of the proposed approach yet there are a few issues that possibly limit the impact of the proposed work. 
See my questions below.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Interesting idea to perform the optimization in the latent space of a discriminatively trained face recognition model\", \"Well written paper, easy to read and understand\", \"Decent experiments although lacking in some respects\"], \"weaknesses\": \"It could be that the method has significant limitations in terms of scaling the number of images that can be generated. The impact of the work has not been fully demonstrated. See also my questions below.\", \"questions\": \"1. Are both reference embeddings and gallery embeddings generated from StyleGan? In this case the only difference between them is that the Gallery embeddings are not updated during optimisation?\\n2. Are all methods in table 1 trained in the same way using the same face recognition model and training pipeline?\\n3. A fair comparison in table 1 should use 50 images per identity for your method\\n4. It\u2019s important to compare against SOTA (e.g. DCFace) at scale (i.e. increasing the number of identities). Specifically, table 3 should not be just an ablation study but you need to show that your method scales favorably and/or outperforms SOTA as the number of training images increases. \\n5. In general, how the method scales has been poorly studied (there\u2019s only 1 result in table 3). The Scaling Dataset Generation section discusses computational issues that may arise from scaling the dataset but does not provide concrete numbers (e.g. a figure showing training time vs dataset size), conclusions or practical solutions (i.e. a solution is proposed but not put in practice)\\n6. Baselines: what about direct comparison with arc2face? 
Since they don\\u2019t have to solve your optimisation problem, they can generate a lot more images for training\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper propose an interesting solution for synthetic face recognition, i.e. optimizing the hyperspace for generation. The solution is to treat the face generation as a packing problem on the embedding space. The optimization goal is to find feature embedding that is of high inter-class variation. Finally this paper adopt a pretrained generation method to generate the dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper is good in writing and has a solid mathematical formulation.\", \"weaknesses\": \"1. [Major] I don't see the motivation to convert the face generation problem as a packing problem on the embedding space from the storytelling, Please provide some related empirical/ theoretical works regarding why the packing problem could be of use for SFR.\\n\\n2. [Major] This paper adopts Arc2Face for final image generation. However, (1) The ablation study doesn't show the advance of directly sending random feature embedding (each embedding is ensured by restricting the similarity below a certain threshold, e.g. 0.3) to Arc2Face; (2) The comparison with Arc2Face is missing in Table 1, additionally the experiment is marginal better than DCFace. The average performance is 89.876 which is similar to DCFace and dramatically lower than Arc2Face. \\n\\n3. [Major] In Section 'Solving the HyperFace Optimization', the authors choose AdamW for the optimization solution. However, the other alternative optimization methods are not specified and compared in this paper.\\n\\n4. Another concern is that the proposed method generates more images(640k) to produce similar performance to DCFace (500k).\\n\\n5. 
Large intra-class variations cannot be observed in the visualization section.\\n\\n6. [Minor] Notation is not specified in fig 1.2. Please provide more description for the reader to understand the mathematical formulation and the whole generation process. For example, what does 'reference embedding' stand for? I understood it only when I saw the 'Image Generation' section. And what is X_{g}?\\n\\n7. Please give some detailed pseudo-code for the entire process (training/generation) for the reader to understand the method.\", \"questions\": \"Please see the weaknesses.\\n\\nIf the authors address the concerns well, I am happy to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces a method for generating synthetic face datasets by optimizing identity embeddings on a hypersphere using gradient descent, followed by image synthesis with a generative model. It demonstrates improved inter-class variation and competitive performance on face recognition benchmarks. A well-defined optimization framework, extensive experiments, and scalability analysis are the main strengths of this paper. There are also several limitations of this paper, including limited exploration of intra-class variation, insufficient empirical/theoretical insights into optimization, and weaker performance on age-related benchmarks. Despite these gaps, the paper makes a meaningful contribution to synthetic data generation for face recognition, supporting its merit for acceptance with further refinements.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about scalability, intra-class variation, and theoretical analysis of the optimization method. 
The authors addressed these by introducing a larger dataset with 50k identities, implementing stochastic optimization to reduce complexity, and adding ablation studies, visualizations, and theoretical insights. While Reviewer khYx acknowledged the improved clarity and experiments, Reviewer YfZL and Reviewer RHfp highlighted the need for deeper exploration of intra-class variation and optimization theory. The authors\u2019 efforts demonstrated substantial improvements and a commitment to refining the work. While some concerns remain, the paper presents valuable contributions, supporting its potential for acceptance with further enhancements.\"}", "{\"title\": \"Authors Reply to Reviewer YfZL [Part 2/2]\", \"comment\": [\"**Reply to Weakness #3 [Major] (Optimization)**: Following the reviewer's suggestion, we conducted a new experiment and used other optimization techniques. We added the results in Appendix E of the revised version of the paper. As the results in our new ablation study show, solving the HyperFace optimization with different optimizers leads to comparable performance.\", \"**Reply to Weakness #4 (Number of Images)**: While by default we generated 64 samples per identity, we reported an ablation study for different numbers of samples per identity, including 50 samples. Comparing the results reported in the ablation study with Table 1, we can see that even with 50 samples per identity, our method achieves comparable performance. Considering the reviewer's concern, we updated the results in Table 1 with 50 samples per identity (500k images). However, still with 500k images, the ranking of synthetic datasets over different benchmarks remains unchanged.\", \"**Reply to Weakness #5 (Visualization for Intra-class Variations)**: In Figure 1 of the paper and also Figure 4 of the appendix, we illustrated sample face images from our dataset, including 8 samples for each subject to illustrate the intra-class variation. 
Following the reviewer's comment, in the revised version of the paper, we added 64 images of two synthetic identities in Appendix G to illustrate intra-class variations in our dataset.\", \"**Reply to Weakness #6 [Minor] (Caption of Fig 2)**: Figure 2 provides a general overview of our method. Following the reviewer's suggestion, we extended the caption and provided more details of our data generation in the caption of this figure.\", \"**Reply to Weakness #7 (Pseudo-code for the entire process)**: Following the reviewer's suggestion, we added a pseudo-code for the entire data generation process in Appendix F of the revised version of the paper.\", \"We hope we could adequately address the reviewer's concerns.\", \"In case the reviewer found our reply convincing, we appreciate it if the reviewer can increase their scores and rating.\", \"We are happy to continue the discussion if any part remains unclear.\"]}", "{\"title\": \"Authors Reply to Reviewer Feedback\", \"comment\": \"We thank the reviewer for their reply and their continued engagement in the discussion. We are happy that our reply could address the reviewer's concerns. Below, we tried to address the remaining concern of the reviewer:\\n\\nOur loss function in equation (1) represents a famous *open* problem, which is known as spherical code optimization or Tammes problem. \\nThe optimal solutions for the Tammes problem are studied for small dimensions and a small number of points. However, for large dimensions and a high number of points there is no closed-form solution for the Tammes problem. There are different approaches for solving this optimization problem (such as geometric optimization, numerical optimization, etc.) for large dimensions and a high number of points. However, for a large dimension (e.g., 512) and a *very* large number of points (e.g., 10k identities) solving this problem with geometric optimization or numerical optimization is computationally very expensive. 
Hence, we solve this problem with a gradient descent approach, which allows us to solve the optimization with reasonable computational resources. In Appendix A, we report the computation required to solve our optimization with our method and further reduce the computation with a stochastic optimization in Appendix B, where we demonstrated theoretically and empirically that stochastic optimization reduces the complexity while resulting in a comparable performance.\\n\\nBecause the Tammes problem is very difficult for large dimensions and a high number of points, an in-depth analysis of equation (1) requires extensive study. We would like to refer the reviewer to a very recent ICML 2024 workshop paper [B], which is focused only on the Tammes problem, and supports the reviewer's intuition of (near) uniform distribution of the (sub)optimal solutions. Given the sophistication and difficulty of the Tammes problem for high dimensions and the fact that our final loss function is indeed equation (2), we believe further analysis of equation (1) is beyond the scope of this paper. However, to address the reviewer's concern, we updated the paper and elaborated further on the previous studies on the Tammes problem in the second revised version of the paper.\\n\\n[B] Tokarchuk, et al. \\\"On the Matter of Embeddings Dispersion on Hyperspheres\\\", In ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling, 2024. URL: https://openreview.net/pdf?id=yh4IjSjAhQ\\n\\nWe sincerely appreciate the reviewer's comments, both in the initial review and discussions, which helped us improve the quality of our paper. We are, of course, gladly open to further discussions if the reviewer still has any concerns.\"}", "{\"comment\": \"Thank the authors for their efforts in addressing my concerns. 
However, I still have the following issues, which prevent me from improving my rating:\\n\\n(1) 500K Data Setting:\\nThe average performance in the 500K data setting is 89.498, which is lower than DCFace's performance listed in the table (90.22). This discrepancy raises concerns about the effectiveness of the proposed approach under this setting.\\n\\n(2) Visualization:\\nThe proposed method still lacks sufficient age-related variation in its visualizations, which is an important aspect that remains unaddressed.\"}", "{\"title\": \"Authors Reply to Reviewer feedback\", \"comment\": \"We appreciate the reviewer for their engagement in the discussion. We are happy that our reply could address some of the initial concerns. Below, we tried to address the new points mentioned by the reviewer:\\n\\n- **Reply to Point #1 (Uniform Distribution):** As mentioned earlier in our **Reply to Weakness #1 (Uniform Distribution)**, our method does not only include optimizing equation (1), but we have an additional regularization term in our optimization in equation (2). **The regularization term in equation (2) prevents uniform distribution of points over the hypersphere**. In fact, the regularization term tries to keep the solution space for our optimization close to the manifold of face recognition embeddings on the hypersphere instead of the entire surface of the hypersphere. Therefore, **equation (2) cannot result in a uniform distribution over the entire hypersphere**. Our ablation study in Table 6 also shows that our regularization in equation (2) improves the recognition performance of the trained face recognition model.\\n\\n- **Reply to Point #2 (largest dataset):** Our dataset with 50k identities has 3.2M images, which is the largest dataset in terms of *number of images*. As mentioned by the reviewer, DigiFace has 110k identities but with 1.22M images. 
While we acknowledge that DigiFace has more *identities*, in our **Reply to Weakness #4 (Additional Experiment)** we stated that *our new generated dataset with 50,000 identities has already the largest `number of synthetic images`* (i.e., 3.2M). We should note that our datasets (with 50k identities and even 10k identities) far outperform DigiFace on all benchmarks. In Appendix C, we also provided a benchmark with the largest available version of all datasets, where our dataset achieves competitive performance with state-of-the-art in the literature. Please note that in the paper, we have not mentioned that our dataset is the largest dataset in the literature.\\n\\n- **Reply to Point #3 (Maximum Number of Unique Identities):** \\nWe would like to note that Figure 3 of reference [3] (i.e., DCFace paper) mentioned by the reviewer is on the capacity of face generator models (DiscoFaceGAN and unconditional DDPM). In fact **this figure does not provide an answer to the maximum number of unique identities that can be generated by DCFace**. In particular, for Figure 3 in the DCFace paper, Kim *et al.* [3] generated 10,000 images with two face generator models (DiscoFaceGAN and DDPM) and compared all the generated images. By changing the threshold of the face recognition model, they plotted Figure 3 in [3]. Therefore, even in the DCFace paper (CVPR 2023) the question of the *maximum number of unique identities* has not been addressed. While in Figure 3, a maximum of 10k identities are counted, Kim et al. [3] also published a version of the dataset with 60k identities. 
Therefore, the maximum number of identities that can be generated by DCFace is not evident in Figure 3 of the DCFace paper [3].\\nAs a matter of fact, the *maximum number of unique identities* has been studied more in the context of the capacity of face generator models, such as in [A].\\nHowever, we believe that to answer this question for synthetic datasets, we need to increase the number of identities in generated datasets and train face recognition models with larger datasets. As reported in the paper, by increasing the number of identities, we still could not observe saturation in performance, and therefore we cannot ensure the maximum number of unique identities in our method. However, generating a larger dataset requires more computation and time, while our method with 50k identities already achieves competitive performance with the state of the art.\\n\\n[A] Boddeti, et al. \\\"On the biometric capacity of generative face models\\\", IEEE International Joint Conference on Biometrics (IJCB), 2023.\\n\\nWe would be glad to continue the discussion if any part remains unclear.\"}", "{\"title\": \"Authors Reply to Reviewer RHfp [Part 1/2]\", \"comment\": \"We thank the reviewer for their time in reviewing our paper and for their valuable comments. We are happy that the reviewer found our paper *well-written and easy to follow*. We are also delighted by the reviewer's positive feedback on the *generalization across different validation sets* of our method with *promising satisfactory improvements on benchmark datasets*.\\nBelow, we tried our best to address the concerns raised by the reviewer:\\n\\n- **Reply to Weakness #1 (Uniform Distribution)**: While equation (1) is focused on optimization over the entire hypersphere (n-ball), the face recognition manifold does not necessarily cover the entire hypersphere. 
For this reason, we proposed a regularization in equation (2), which tries to keep our HyperFace optimization on the face recognition manifold using embeddings from a gallery of face images. Therefore, we cannot theoretically study the distribution of points on the entire hypersphere; instead, we need to consider the distribution of points on the face recognition manifold over the hypersphere, which is not trivial and requires accurate estimation of the manifold. \\n\\n\\n- **Reply to Weakness #2 (HyperFace Optimization)**: Following the reviewer's suggestion, we added an ablation study in Appendix E and studied the case where random points are used as reference embeddings for generating the dataset (i.e., without HyperFace optimization). As the results in our new experiment in Appendix E show, our HyperFace optimization results in a synthetic dataset with better quality, which achieves superior performance.\\n\\n\\n- **Reply to Weakness #3 (Experiments and Discussions)**: We tried to include an extensive study on different parameters in our paper. Following the reviewer's comment, we revised our experiments and further discussed our results. In the revised version of the paper, we carefully considered all experiments suggested by reviewers and expanded our analyses. We also explored the complexity of our dataset generation in Appendix A. We proposed a stochastic optimization, which reduces the complexity, and provided in-depth theoretical and experimental analyses in Appendix B. We also provided more experiments in the appendix of the revised version of the paper (Appendix C: synthetic datasets at scale, Appendix D: identity leakage and recognition performance, Appendix E: additional ablation study). 
In case any part remains unclear, we would appreciate it if the reviewer could clarify the missing part and the required discussion.\n\n\n- **Reply to Weakness #4 (Additional Experiment)**: Following the reviewer's suggestion, we added another experiment, in which we increased the number of identities to 50,000 in Table 3. The results indicate improvements in the performance of our model over different benchmarks. Therefore, we cannot conclude saturation at 50,000 identities, and further experiments are required. \n\n| # IDs | LFW | CPLFW | CALFW | CFP | AgeDB |\n|---|:---:|:---:|:---:|:---:|:---:|\n| 10k | 98.67 | 84.68 | 89.82 | 89.14 | 87.07 |\n| 30k | 98.82 | 85.23 | 91.12 | 91.74 | 89.42 |\n| 50k | 98.27 | 85.6 | 91.48 | 92.24 | 90.4 |\n\nWhile increasing the number of identities requires more computation time, we should note that our newly generated dataset with 50,000 identities already has the largest number of synthetic images compared to all synthetic datasets in the literature listed in Table 1. We also added a new section in the appendix (Appendix C), where we compared our method with the largest synthetic datasets in the literature. Our dataset with 50k identities and 3.2M images is larger than the synthetic datasets from the literature and achieves competitive performance with previous large-scale synthetic datasets, as reported in Appendix C.\n\n\n- **Reply to Weakness #5 (Identity Leakage)**: We thank the reviewer for raising this point and suggesting this interesting experiment. Following the reviewer's suggestion, we conducted a new experiment in which we excluded images with high similarity to real images. Then, we used the cleaned dataset to train a new face recognition model and benchmarked the performance of the trained face recognition model. The results show that the new face recognition models achieve comparable recognition performance. 
We added a new section to Appendix D and discussed it in detail.\"}", "{\"title\": \"Authors Reply to Reviewer RHfp [Part 2/2]\", \"comment\": \"We believe that the three questions raised by the reviewer are answered in our replies to the raised weaknesses:\n- **Reply to Question #1**: Please see our reply to Weakness #1 about the uniform distribution and the face recognition manifold, which makes theoretical analysis difficult. We would like to note that in the revised version of the paper, we further explored the complexity of our optimization in Appendix A and proposed a stochastic optimization that reduces the complexity in Appendix B. We provide thorough theoretical and experimental analyses where our experimental results meet our theoretical predictions. \n- **Reply to Question #2**: Please see our replies to Weaknesses #2, #3, and #4. We added a larger dataset with 50k identities in Table 3 and also provided more experiments in the appendix of the revised version of the paper (Appendix C: synthetic datasets at scale, Appendix D: identity leakage and recognition performance, Appendix E: additional ablation study).\n- **Reply to Question #3**: Please see our reply to Weakness #5.\n\nWe hope we could adequately address the reviewer's concerns. \nIn case the reviewer found our reply convincing, we would appreciate it if the reviewer could increase their scores and rating.\nWe are happy to continue the discussion if any part remains unclear.\"}", "{\"title\": \"Authors Reply to Reviewer khYx [Part 2/2]\", \"comment\": \"- **Reply to Question #1**: We provide an in-depth discussion about the computation requirement in Appendix A and further improvements in Appendix B of the revised version of the paper. In our initial submission, we discussed the complexity of our optimization as its limitation and mentioned how this can be improved. 
In the revised version of the paper, we further explain our approach to reduce the computation required and propose stochastic optimization for HyperFace. We theoretically prove that stochastic optimization leads to similar results with less computation. In addition, we report numerical results for stochastic optimization which meet our theoretical analyses.\n\n- **Reply to Question #2**: We thank the reviewer for raising this interesting question. To our knowledge, this experiment has not been explored in previous work on synthetic data for face recognition, and it in fact requires considerable effort to retrain the face generator model. We will consider this experiment in our future work.\n\n\nWe hope we could adequately address the reviewer's concerns. \nIn case the reviewer found our reply convincing, we would appreciate it if the reviewer could increase their scores and rating.\nWe are happy to continue the discussion if any part remains unclear.\"}", "{\"title\": \"I maintain my rating.\", \"comment\": \"Thanks to the authors for providing feedback on my comment. Some points in my initial review have yet to be addressed.\n1- In lines 137-148 and Equation 1 of the original manuscript, the paper tries to maximize the minimum distance of the randomly selected pairs. Intuitively, pushing all features away from each other should indeed cause them to be roughly uniformly distributed. Please refer to [2] for more details. \n2- Please note that based on the authors' response (50K identities), the proposed method is not the largest synthetic FR dataset. DigiFace [1], containing 1.22M images of 110K identities, is the largest public synthetic dataset for face recognition. \n3- What is the maximum number of unique identities that the proposed method can produce? Please see Figure 3 in [3] for clarification.\n\n[1] Bae, Gwangbin, et al. 
\\\"Digiface-1m: 1 million digital face images for face recognition.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.\\n[2]Wang, Tongzhou, and Phillip Isola. \\\"Understanding contrastive representation learning through alignment and uniformity on the hypersphere.\\\" International conference on machine learning. PMLR, 2020.\\n[3]Kim, Minchul, et al. \\\"Dcface: Synthetic face generation with dual condition diffusion model.\\\" Proceedings of the ieee/cvf conference on computer vision and pattern recognition. 2023.\"}", "{\"title\": \"Reminder to Reviewer UATk\", \"comment\": \"Dear Reviewer UATk,\\n\\nWe thank you for your review and valuable comments, which helped us improve the quality of our paper. We carefully considered all the points and tried our best to address the concerns raised by the reviewer in our reply and the revised version of the paper. While we still have not received any feedback on our reply from the respected reviewer, we are gladly open to further discussions.\"}", "{\"title\": \"Re: Authors' feedback\", \"comment\": \"I would like to thank the authors for their diligent efforts in addressing all of the reviewers' concerns and conducting additional experiments. The authors have satisfactorily answered all of my questions and concerns.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Authors Reply to Reviewer Feedback [Part 1/2]\", \"comment\": \"We appreciate the reviewer for their engagement in the discussion. We are happy that our reply could address some of the initial concerns, including major concerns. Below, we tried to address the points mentioned by the reviewer in their new feedback:\\n\\n**Reply to Point #1 (Comparison):** Comparing our dataset with 500,000 images against DCFace in Table 1, we can observe that the face recognition model trained with our dataset achieves superior performance on 3 [out of 5] benchmarks. 
However, our method has inferior performance on AgeDB and CALFW (Cross-Age LFW), which leads to an overall drop in the average accuracy, as raised by the reviewer. Both AgeDB and CALFW, however, are focused on evaluating face recognition models under age variations. We further discuss this limitation of our method in our **Reply to Point #2 (Visualization: Age-related Variations)**. Still, we would like to argue that comparing the average accuracies over all benchmarks may not lead to a fair comparison. As can be seen in the benchmarks (e.g., in Table 1), the variation ranges of accuracies over different benchmark datasets are not constant. For example, LFW is very competitive and it is difficult to improve the performance on LFW by 1%, while the variations are much larger for AgeDB. Therefore, if we consider average accuracy, we are inherently putting more weight on datasets with higher variations. For this reason, in several challenges on face recognition models, such as EFaR@IJCB2023 and SDFR@FG2024, the Borda Count has been used for leaderboards instead of average accuracy. The Borda Count considers the rank of each model on each benchmark separately, and then averages the points achieved by these rankings over all datasets.\n\nWe would also like to highlight that even if we consider the average accuracy on all datasets, our method achieves the *best average accuracy* when compared with large synthetic datasets in the literature. In Appendix C, we compared face recognition models trained with larger versions of each dataset (which are publicly available), where our dataset with 50k identities and 3.2M images achieves competitive performance with the literature. 
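As a side note for readers, the difference between plain averaging and the Borda Count scheme described above can be illustrated with a small self-contained sketch. All numbers below are hypothetical (not taken from the paper); the point is only that a benchmark with a wide accuracy spread dominates a plain average, while the Borda Count weighs each benchmark's ranking equally.

```python
# Hypothetical accuracies of 3 models on 3 benchmarks. The first column mimics
# a tight benchmark (like LFW); the others have wider spreads (like AgeDB).
acc = {
    "A": [99.0, 85.0, 89.0],
    "B": [98.5, 90.0, 88.0],
    "C": [98.8, 86.0, 90.0],
}

# Plain average accuracy per model.
avg = {m: sum(v) / len(v) for m, v in acc.items()}

# Borda Count: on each benchmark, award points by rank (worst = 0 points,
# best = n-1 points), then sum the points over all benchmarks.
models = list(acc)
n_bench = len(next(iter(acc.values())))
borda = {m: 0 for m in models}
for b in range(n_bench):
    for pts, name in enumerate(sorted(models, key=lambda m: acc[m][b])):
        borda[name] += pts

print(max(avg, key=avg.get))      # "B" wins the plain average (wide benchmarks dominate)
print(max(borda, key=borda.get))  # "C" wins the Borda Count
```

Here model B's average is carried by the wide-spread benchmark, while model C ranks better across benchmarks overall.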
Meanwhile, if we calculate the average accuracies over all benchmarks (as reported in the following table) our dataset with 50k identities has the *highest average accuracy* compared to large synthetic datasets in the literature:\\n\\n\\n| Dataset Name | # IDs | # Images | Average Accuracy (%)|\\n|------------------------------------------------|---------|-----------|---------|\\n| SynFace [Qiu et al., 2021] | 10,000 | 999,994 | 69.53 |\\n| SFace [Boutros et al., 2022] | 10,572 | 1,885,877 | 79.04 |\\n| Syn-Multi-PIE [Colbois et al., 2021] | 10,000 | 180,000 | 63.13 |\\n| IDnet [Kolf et al., 2023] | 10,577 | 1,057,200 | 71.12 |\\n| ExFaceGAN [Boutros et al., 2023] | 10,000 | 599,944 | 69.46 |\\n| GANDiffFace [Melzi et al., 2023] | 10,080 | 543,893 | 79.84 |\\n| Langevin-Dispersion [Geissb\\u00fchler et al., 2024] | 10,000 | 650,000 | 77.79 |\\n| Langevin-DisCo [Geissb\\u00fchler et al., 2024] | 10,000 | 650,000 | 85.16 |\\n| Langevin-DisCo [Geissb\\u00fchler et al., 2024] | 30,000 | 1,950,000 | 90.31 |\\n| DigiFace-1M [Bae et al., 2023] | 109,999 | 1,219,995 | 76.97 |\\n| IDiff-Face (Uniform) [Boutros et al., 2023] | 10,049 | 502,450 | 87.67 |\\n| IDiff-Face (Two-Stage) [Boutros et al., 2023] | 10,050 | 502,500 | 85.85 |\\n| DCFace [Kim et al., 2023] | 10,000 | 500,000 | 90.22 |\\n| DCFace [Kim et al., 2023] | 60,000 | 1,200,000 | 91.45 |\\n| **HyperFace [Ours]** | 10,000 | 640,000 | 89.88 |\\n| **HyperFace [Ours]** | 50,000 | 3,200,000 | **91.60** |\", \"references\": [\"EFaR@IJCB2023 competition summary paper: \\\"EFaR 2023: Efficient face recognition competition\\\", In Proc. of the IEEE International Joint Conference on Biometrics (IJCB), 2023.\", \"SDFR@FG2024 competition summary paper: \\\"SDFR: Synthetic data for face recognition competition\\\", In Proc. 
of the IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), 2024.\"]}", "{\"title\": \"Reply to Reviewer UATk [Part 2/2]\", \"comment\": \"- **Reply to Question #5**: Following the reviewer's feedback, we provided an evaluation of the complexity of our method in Appendix A of the revised version of the paper. We also provided further theoretical and experimental analyses for reducing the complexity of our optimization in Appendix B. We propose to solve the optimization with mini-batches, which significantly reduces the complexity of our optimization and allows scaling the dataset generation process. In our initial submission, we discussed the complexity of our optimization as its limitation for scaling and mentioned how this can be improved; following the reviewer's suggestion, we provided in-depth theoretical and experimental analyses in Appendix B of the revised version of the paper.\n\n- **Reply to Question #6**: For a fair comparison in our experiments for Table 1, we fixed all hyperparameters and trained face recognition models for all datasets with the same configuration (backbone, loss function, batch size, number of epochs, etc.). Therefore, we could only compare with methods that have publicly available datasets. However, the authors of Arc2Face have not published their synthetic dataset, and we could not reproduce the results reported in their paper. There are also open issues on their GitHub repository, where several researchers reported that they could not reproduce their results for face recognition. The configuration for training face recognition in the Arc2Face paper is not available either, which makes it more difficult to make a fair comparison.\nHowever, considering the reviewer's concern, we conducted a new experiment, and as suggested by Reviewer 3 (YfZL), we used random embeddings (without HyperFace optimization) to generate a synthetic dataset. 
As the results of our new experiment in Appendix E show, our HyperFace optimization results in a higher-quality synthetic dataset that achieves superior performance.\n\n\nWe hope we could adequately address the reviewer's concerns. \nIn case the reviewer found our reply convincing, we would appreciate it if the reviewer could increase their scores and rating.\nWe are happy to continue the discussion if any part remains unclear.\"}", "{\"title\": \"Authors Reply to Reviewer YfZL [Part 1/2]\", \"comment\": [\"We thank the reviewer for their time in reviewing our paper and for their valuable comments. We are happy that the reviewer found our paper *\\\"good in writing and has a solid mathematical formulation\\\"*. Below, we tried our best to address the concerns raised by the reviewer:\", \"**Reply to Weakness #1 [Major] (motivation)**: In general, in any face recognition dataset (either synthetic or real), the variations in the images are very important: in particular, it is necessary to have diversity in the identities (inter-class variation) and also variations across the images of each subject (intra-class variation). As a matter of fact, having sufficient variation in a dataset has a direct impact on the generalization capability of the face recognition model (trained with that dataset) and its performance on different benchmarks. So, it can be useful to see how we can represent a face recognition dataset in a lower dimension (compared to image dimensions) and then see how we can use such a representation to improve synthetic dataset generation.\", \"Previous work, e.g., [1,2], showed that we can study face images in the embedding space of a face recognition model. The normalized embedding space of a pretrained face recognition model forms a hypersphere (i.e., the surface of an n-ball), and we can represent the face dataset on the hypersphere by using the embeddings of its images. 
In order to have larger variation in a dataset, we would like the face embeddings to cover as much of the hypersphere as possible. Therefore, our question is how to distribute identities (using their embeddings) on the face hypersphere. This leads to a packing problem to find an optimal representation for a given number of identities. While representing face images on the hypersphere of a pretrained face recognition model has been used and studied in the literature [1,2], to our knowledge our paper is the first work that formulates identity sampling for synthetic dataset generation as a packing problem.\", \"We would like to stress that intra-class variation can be improved by using conditional face generator models (for pose, lighting conditions, etc.) when generating the synthetic dataset. However, the major remaining problem is how to increase inter-class variation, which is the focus of our paper; we investigate how we can improve inter-class variation assuming we can sample identities from the face embedding hypersphere.\", \"[1] Terh\u00f6rst, et al. \\"On the (limited) generalization of masterface attacks and its relation to the capacity of face representations\\", IEEE International Joint Conference on Biometrics (IJCB), 2022.\", \"[2] Boddeti, et al. \\"On the biometric capacity of generative face models\\", IEEE International Joint Conference on Biometrics (IJCB), 2023.\", \"**Reply to Weakness #2 [Major] - Point 2 (Comparison with Arc2Face - Table 1)**: For a fair comparison in our experiments for Table 1, we fixed all hyperparameters and trained face recognition models for all datasets with the same configuration (backbone, loss function, batch size, number of epochs, etc.). Therefore, we could only compare with methods that have publicly available datasets. However, the authors of Arc2Face have not published their dataset, and we could not reproduce the results reported in their paper. 
There are also open issues on their GitHub repository, where several researchers reported that they could not reproduce their results for face recognition. Since our training setup is different, we believe the numbers reported in our experiments are not comparable with the values reported in the Arc2Face paper. The configuration for training face recognition in the Arc2Face paper is not available either, which makes it more difficult to make a fair comparison.\", \"**Reply to Weakness #2 [Major] - Point 1 (Comparison with Arc2Face - Ablation study)**: Following the reviewer's suggestion, we conducted a new experiment and used random embeddings (without HyperFace optimization) to generate a synthetic dataset. We also ensured that the similarity of each pair is below a certain threshold (0.3). As the results in Appendix E show, our HyperFace optimization results in a higher-quality synthetic dataset that achieves superior performance.\"]}", "{\"title\": \"Authors Reply to Reviewer khYx [Part 1/2]\", \"comment\": \"We thank the reviewer for their time in reviewing our paper and for their insightful comments. We are happy that the reviewer found our paper to have a *well-formulated optimization problem, efficient solution, extensive experiments, and ethical considerations*. Below, we tried our best to address the remaining concerns raised by the reviewer:\n\n- **Reply to Weakness #1 (Limited scale)**: Following the reviewer's suggestion, we added a new experiment and further increased the size of the dataset to 50k identities in Table 3. We should note that our dataset with 50k identities and 64 samples per identity (3.2M images) is larger than the publicly available synthetic datasets from the literature. 
The results in this table demonstrate that we can still increase the number of identities and scale our dataset generation without saturating the performance: \n\n| # IDs | LFW | CPLFW | CALFW | CFP | AgeDB |\n|---|:---:|:---:|:---:|:---:|:---:|\n| 10k | 98.67 | 84.68 | 89.82 | 89.14 | 87.07 |\n| 30k | 98.82 | 85.23 | 91.12 | 91.74 | 89.42 |\n| 50k | 98.27 | 85.6 | 91.48 | 92.24 | 90.4 |\n\nWe also provided a comparison with synthetic datasets at scale in Appendix C of the revised version of the paper.\nIn addition, we provided an in-depth discussion of the computation requirement in Appendix A and further improvements in Appendix B of the revised version of the paper. In particular, we propose stochastic optimization for our approach and theoretically prove that the stochastic optimization leads to similar results. In addition, we report numerical results which meet our theoretical analyses.\n\n- **Reply to Weakness #2 (Inter-class variation)**: We acknowledge the reviewer's comment that the main focus of our work is on increasing inter-class variation in the synthetic datasets. However, we would like to stress that increasing inter-class variation is less explored in the literature, whereas there are extensive studies on increasing intra-class variation in face image generation. For example, training conditional generator models or using ControlNet can easily help to improve intra-class variation in the image generation process. Meanwhile, we still used a simple yet effective approach to increase intra-class variation in our synthetic dataset, where we proposed to add Gaussian noise controlled by the hyperparameter $\\beta$ to the embedding of each synthetic identity in the image generation process. The noise added to the embeddings simulates the change in embedding caused by image variations (such as lighting conditions). 
We also provided an ablation study on the effect of hyperparameter $\\\\beta$ on the performance of the generated dataset in Table 7 of the paper.\"}", "{\"title\": \"Authors General Response to Reviewers\", \"comment\": \"We thank all reviewers for their time and valuable comments which helped us improve the quality of our paper. We carefully considered every point raised by the reviewers and revised the paper accordingly and provided point-by-point responses in our reply to each reviewer.\", \"the_summary_of_important_changes_in_the_revised_version_of_the_paper_is_as_follows\": [\"We generated a larger dataset with 50k identities and 3.2M images. The new dataset achieves better performance and shows the scalability of our dataset generation. We added the results to Table 3 of the paper.\", \"We provide an evaluation of the complexity and computation requirement for generating synthetic datasets with our method in Appendix A of the paper. As described in our initial submission, the complexity of HyperFace optimization is quadratic with respect to the number of identities. However, in our initial submission, we suggested that it can be significantly reduced by stochastic optimization. This aspect of our work received the attention of reviewers who requested more in-depth analyses. Therefore, in Appendix B of the paper, we first theoretically show that stochastic optimization leads to similar results but with less complexity, and then evaluate it with experiments. 
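To make the intra-class augmentation just described concrete, here is a minimal sketch (our own illustration, not the paper's code; the embedding size, the beta value, and the sample count are arbitrary placeholder assumptions): a unit-norm identity embedding is perturbed with Gaussian noise scaled by beta and re-normalized, and each perturbed embedding would then condition the face generator to render one image of that identity.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_variations(identity_emb, n_images, beta):
    """Perturb a unit-norm identity embedding with Gaussian noise of scale beta."""
    e = identity_emb / np.linalg.norm(identity_emb)
    noisy = e[None, :] + beta * rng.normal(size=(n_images, e.size))
    # Re-normalize so each variant stays on the unit hypersphere.
    return noisy / np.linalg.norm(noisy, axis=1, keepdims=True)

identity = rng.normal(size=512)  # one synthetic identity embedding (illustrative)
variants = sample_variations(identity, n_images=64, beta=0.01)

# Every variant stays close in cosine similarity to the identity embedding,
# so the generated images should depict the same person with small changes.
cos = variants @ (identity / np.linalg.norm(identity))
```

Larger beta values trade identity consistency for more intra-class variation, which is the trade-off the ablation on beta explores.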
Our experimental results meet our theoretical prediction, and stochastic optimization significantly reduces the complexity while maintaining performance.\", \"We provided a comparison of synthetic datasets in the literature at scale in Appendix C, where our dataset achieves competitive performance with the largest synthetic datasets in the literature.\", \"We investigate the effect of identity leakage on the recognition performance of face recognition models trained with synthetic datasets in Appendix D.\", \"We also provided additional ablation studies to investigate the effectiveness of our method in Appendix E.\", \"We provided pseudo-code for the entire data generation process in Appendix F and provided more sample images in Appendix G.\", \"We expanded our discussions in the experiments section.\", \"We hope we could adequately address the reviewers' concerns.\", \"We kindly invite all reviewers to further discussions if any part remains unclear.\"]}", "{\"title\": \"New Revision (Revision #2)\", \"comment\": \"Following a concern raised by Reviewer RHfp, in a new revision, we revised **Solving the HyperFace Optimization** in **Section 2.2**, and further explained the previous studies on the Tammes problem. As mentioned earlier, equation (1) of our paper represents the Tammes problem. The Tammes problem is, however, an open problem, and optimal solutions have only been established for small dimensions and small numbers of points; for large dimensions and a high number of points there is no closed-form solution. There are different approaches for solving this problem in such regimes (such as geometric optimization, numerical optimization, etc.). However, for a large dimension (e.g., 512) and a *very* large number of points (e.g., 10k identities), solving this problem with geometric or numerical optimization is computationally very expensive. 
Hence, we solve this problem with a gradient descent approach, which allows us to solve the optimization with reasonable computational resources.\nIt is noteworthy that in Appendix A, we report the computation required to solve our optimization with our method, and we further reduce the computation with a stochastic optimization in Appendix B, where we demonstrate theoretically and empirically that stochastic optimization reduces the complexity while resulting in comparable performance.\"}
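The computational saving from the stochastic (mini-batch) variant mentioned above can be sanity-checked with simple pair counting. This is only a back-of-envelope sketch under our own assumptions (n = 10k identities, batch size 256, one "epoch" = visiting each identity roughly once); the actual complexity analysis is the one reported in the paper's Appendices A and B.

```python
def pairs(k):
    """Number of unordered pairs among k points: k choose 2."""
    return k * (k - 1) // 2

n, batch = 10_000, 256
full_step = pairs(n)            # pairwise terms in one full-batch step: O(n^2)
mini_step = pairs(batch)        # pairwise terms in one mini-batch step
steps_per_epoch = n // batch    # batches needed to touch every identity once
epoch_cost = mini_step * steps_per_epoch

print(full_step)                # 49995000 pairwise terms per full step
print(mini_step)                # 32640 pairwise terms per mini-batch step
print(full_step // epoch_cost)  # roughly 39x fewer terms per epoch
```

In other words, a mini-batch epoch costs about n(b-1)/2 pairwise terms instead of n(n-1)/2, a factor of roughly (n-1)/(b-1).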
Datasets with 100K and 300K identities would be particularly informative as they would reveal how the method scales and whether performance degrades with increased identity count and potential data noise. This would provide a more comprehensive understanding of the method's robustness and real-world applicability.\\n2. Narrow focus: The paper's focus on improving inter-class variation is important, but expanding the scope to address intra-class variability would significantly enhance its impact. Specifically, exploring how the method handles variations in pose, illumination, and expression within the same identity would be valuable. Additionally, investigating the method's robustness to occlusions or image quality degradation would further strengthen the evaluation and demonstrate its potential for real-world scenarios.\", \"questions\": \"1. What is the computation resource and time needed to generate larger scale datasets, e.g. n_id = 30k or more?\\n2. It would be interesting to see if we can use the FR model trained by the proposed synthetic dataset to build another good synthetic dataset\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer UATk [Part 1/2]\", \"comment\": [\"We thank the reviewer for their time in reviewing our paper and for their insightful comments. We are happy that the reviewer found our paper *well written, easy to read and understand*, with *interesting idea*. Below, we tried our best to address the remaining concerns raised by the reviewer:\", \"**Reply to Weaknesses**: In the revised version of the paper, we added results for a larger version of our dataset with 50k identities. Moreover, we provide an in-depth discussion about the computation requirement in Appendix A and further improvements in Appendix B of the revised version of the paper. 
In our initial submission, we discussed the complexity of our optimization as its limitation and mentioned how this can be improved. In the revised version of the paper, we further explain our approach to reduce the computation required and propose stochastic optimization for HyperFace. We theoretically prove that stochastic optimization leads to similar results with less computation. In addition, we report numerical results for stochastic optimization which meet our theoretical analyses.\", \"**Reply to Question #1**: Reference embeddings and gallery embeddings can be independently initialized. They can also have different numbers of images. For example, Table 4 reports the recognition performance achieved for the face recognition model trained with datasets with 10k identities and optimized with different numbers of gallery images. The reference and gallery images can be generated by different generator models, such as StyleGAN or a diffusion model (e.g. LDM). We also reported an ablation study on the source of images in Table 5 of the paper.\", \"**Reply to Question #2**: Yes, for a fair comparison we fixed all training configurations (backbone, loss function, batch size, numbers of epochs, etc.) and trained new face recognition models for all datasets.\", \"**Reply to Question #3**: Following the reviewer's comment, we updated our model in Table 1 and used a dataset with 500k images. With 500k images, the ranking of synthetic datasets over different benchmarks remains unchanged. It is noteworthy that we also have an ablation study on the number of images in Table 2 of the paper.\", \"**Reply to Question #4**: Following the reviewer's suggestion, we expand Table 1 with more baselines with larger datasets in Appendix C. As the results for comparing previous datasets at scale show, our synthetic dataset achieves competitive performance with the largest synthetic datasets in the literature. 
The larger version of DCFace does not achieve the best performance on any benchmark. Langevin-DisCo achieves significant improvement with 30k identities; however, the authors of Langevin-DisCo have reported lower performance for 50k identities, indicating limitations in further scaling Langevin-DisCo. In Table 3 of the revised version of the paper, we scaled our dataset to 50k identities. The results show improvement in our performance on the majority of benchmarks (without saturation), which shows that our dataset can be further scaled. Our dataset with 50k identities and 3.2M images is larger than the synthetic datasets from the literature and achieves competitive performance with previous large-scale synthetic datasets, as reported in Appendix C.\"]}", "{\"summary\": \"This paper proposes a synthetic data generation method for FR that aims to improve inter-class variation compared to existing methods. The approach utilizes the embedding space of a pretrained FR model to create an initial gallery, then optimizes these embeddings to uniformly position identities on a unit hypersphere. A conditional diffusion generator is subsequently used to synthesize faces.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\n2. The empirical study shows that the proposed method generalizes well across different validation sets.\n3. Experimental results are promising, with satisfactory improvements on benchmark datasets.\", \"weaknesses\": \"1. HyperFace Optimization: Figure 2 shows that HyperFace optimization results in a uniform distribution of points on the hypersphere. However, the paper lacks evidence, analysis, or experiments demonstrating that Equation 1 is minimized by this uniform distribution. See [1] for more details.\n\n2. Use of Hyperspherical Points: One key aspect of the method is the use of uniformly positioned points on the hypersphere. 
While using a pre-trained FR model for generating the identity gallery is reasonable, it would be helpful to see results without applying HyperFace (using pure FR embedding as condition). How would this affect the results?\\n\\n3. Experimental Section: The experiments section needs major revisions. Most of the subsections only report table results without in-depth discussion or analysis. For a conference of ICLR\\u2019s quality, it\\u2019s important to explain specific behaviors and insights from the proposed method.\\n\\n4. Additional Experiments: Additional experiments could improve clarity on the benefits of the method. For example, while Table 3 presents an ablation study on the number of identities on FR performance, it would also be valuable to show how many novel identities the method can generate (what is the saturation point for the number of novel identities). \\n\\n5. Identity Leakage: The paper mentions identity leakage but lacks in-depth experiments on the synthesized data. What would the performance look like if synthesized images with high similarity to real datasets (e.g., CASIA) were excluded?\", \"questions\": \"My main concerns are threefold: (1) the lack of theoretical and empirical analysis on HyperFace optimization (Equation 1), (2) missing ablations and detailed analysis of results, and (3) understanding the limitations of the method in generating novel identities. I am hopeful these concerns can be addressed during the discussion period, and I am open to increasing my score based on the responses. Please also see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"please see my comment\", \"comment\": \"Thanks to the authors for addressing my concerns. I understand that the regularization term of equation 2 prevents the uniform distribution. 
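The identity-leakage check discussed in this review (excluding synthesized images that are too similar to real ones) can be sketched as a simple embedding-similarity filter. This is our own toy illustration with random vectors, not the paper's pipeline; the embedding dimension, gallery sizes, and the 0.7 threshold are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

real = l2_normalize(rng.normal(size=(100, 128)))      # embeddings of real images
synthetic = l2_normalize(rng.normal(size=(50, 128)))  # embeddings of synthetic images
synthetic[0] = real[3] + 0.01 * rng.normal(size=128)  # plant one near-duplicate
synthetic = l2_normalize(synthetic)

sims = synthetic @ real.T    # cosine similarities, shape (50, 100)
max_sim = sims.max(axis=1)   # closest real embedding for each synthetic image
keep = max_sim < 0.7         # flag likely identity leakage above the threshold
cleaned = synthetic[keep]    # dataset after removing leaked identities
```

Only the planted near-duplicate is removed here; random unrelated embeddings in high dimensions are nearly orthogonal and fall well below the threshold.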
My point concerns equation 1 and its optimization (lines 105-148), which lacks theoretical and empirical analysis. The paper only says \"we solve the optimization problem with an iterative approach based on gradient descent\" and does not provide insight into the optimization or its cost function.\"}", "{\"title\": \"Authors Reply to Reviewer Feedback [Part 2/2]\", \"comment\": \"**Reply to Point #2 (Visualization: Age-related Variations):** We acknowledge the reviewer's concern that the generated images in our method do not include *very high* intra-class variations (such as aging) for each subject. This is particularly because the main focus of the paper has been on increasing inter-class variations through HyperFace optimization. However, increasing intra-class variation has been extensively studied, and there are numerous works in the literature that increase variations for each image, including aging. While adding more intra-class variation is expected to further enhance the performance, our dataset still achieves competitive performance with the state of the art in the literature. This further underscores the importance of inter-class variations, which are the focus of our work, since even with limited intra-class variation we achieve performance comparable to the literature.\\n\\nWe would be glad to continue the discussion if any part remains unclear.\"}"
] }
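The optimization debated in the exchange above — iteratively spreading identity embeddings toward a uniform arrangement on the unit hypersphere via gradient descent — can be illustrated with a minimal sketch. This is an illustrative surrogate, not the paper's Equation 1: it minimizes the sum of squared pairwise cosine similarities by plain gradient descent, re-projecting onto the sphere after each step, which is the kind of iterative scheme the authors describe.

```python
import numpy as np

def uniformize_on_sphere(points, steps=2000, lr=0.05):
    """Spread points on the unit hypersphere by gradient descent.

    Minimizes 0.5 * sum_{i != j} (x_i . x_j)^2 (a stand-in for the
    paper's Equation 1), re-projecting onto the sphere every step.
    """
    x = points / np.linalg.norm(points, axis=1, keepdims=True)
    for _ in range(steps):
        sims = x @ x.T                 # pairwise cosine similarities
        np.fill_diagonal(sims, 0.0)    # ignore self-similarity
        x = x - lr * (sims @ x)        # gradient (up to a constant) of the surrogate loss
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x

def pairwise_loss(x):
    # The surrogate objective: half the sum of squared off-diagonal similarities.
    sims = x @ x.T
    np.fill_diagonal(sims, 0.0)
    return 0.5 * (sims ** 2).sum()
```

With more points than dimensions the similarities cannot all reach zero; the loss instead approaches the tight-frame lower bound, i.e. an approximately uniform arrangement, matching the intuition that the minimizer should be (near-)uniform.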
4YpMrGfldX
Scaling Transformers for Low-Bitrate High-Quality Speech Coding
[ "Julian D Parker", "Anton Smirnov", "Jordi Pons", "CJ Carr", "Zack Zukowski", "Zach Evans", "Xubo Liu" ]
The tokenization of audio with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by applying a transformer architecture with large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bit-rates of $400$ or $700$ bits-per-second. The trained models strongly out-perform existing baselines in both objective and subjective tests.
[ "Audio coding", "neural audio codecs", "transformers" ]
Accept (Poster)
https://openreview.net/pdf?id=4YpMrGfldX
https://openreview.net/forum?id=4YpMrGfldX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w9s6zv9FVc", "lhJFUTKczb", "hpuTNzk5jS", "bKnLyM9ZC4", "Z1heBX5lMP", "Y4lI5npf9D", "SwfF75l6mC", "RgcefXFDf6", "RIClYVXDPU", "QKaeX4Uvln", "QBc66ssbUu", "NxCb7VPAG0", "MrXYvnM0I5", "MaDItTLurm", "JSVjUIZDYM", "IRgrf6kfut", "GRjJ9n5HQh", "C9h0iSnDzh", "98fLrqrloZ", "5ZMrh8ZA72", "0rSjo2LNgh", "0Y3ARyBmfI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1732301374651, 1732301032849, 1732812609891, 1732301275108, 1732894097420, 1734833850630, 1732645555994, 1732301597809, 1732301182290, 1732300815143, 1737524093270, 1732776637226, 1732438731077, 1732818662336, 1732300944404, 1730667468983, 1732644804434, 1732300680403, 1730697589488, 1730664874968, 1732301497034, 1730764285836 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Area_Chair_xrrA" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "~Zhen_Ye2" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_bLHu" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_Baq6" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_bLHu" ], [ 
"ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_VKr9" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_Baq6" ], [ "ICLR.cc/2025/Conference/Submission10943/Authors" ], [ "ICLR.cc/2025/Conference/Submission10943/Reviewer_1mKc" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer Baq6 (Part 1/3)\", \"comment\": \"Dear reviewer Baq6,\\n\\nThank you for your efforts in reviewing this paper, and for raising your concerns with the work. We have grouped your questions and comments and provided detailed responses to each point below.\\n\\n> Q1: I do not think it is novel to scale the parameter count of a neural network autoencoder and demonstrate better compression ratios/reconstruction compared to smaller architectures. This is a well known result.\\n\\nWe agree that scaling is a well known result in other fields (e.g. LLMs), which is one of the primary motivations for us pursuing this direction (as articulated in **Sec.1**). However, we\\u2019re not aware of existing work showing that **audio codec** performance scales with parameter count. If we\\u2019re missing some important work that establishes this, it\\u2019d be much appreciated if you could share it. Our opinion (explained in **Sec.1**) is that this was not a goal for previous codec models as they were more directly targeting the transmission/storage use-cases, which benefit much more from small parameter-efficient models. This is also likely to be related to the overall difficulty of scaling CNNs to the billion parameter level (see e.g. ConvNeXTv2). In our work we establish that:\\n1. Leveraging transformer blocks as the primary component of a model is viable for an end-to-end audio waveform codec, once a number of architectural challenges are overcome.\\n2. 
Scaling parameter count does indeed provide improved performance in audio waveform coding (which we also now demonstrate more directly in the new ablation in **Appendix A.2**).\\n\\nWe strongly believe that both are novel contributions to the field of audio coding, which will be helpful in informing future work.\\n\\n> Q2: I do not think it is novel to restrict the domain of a neural network autoencoder in comparison to another architecture trained on a more general domain and demonstrate better compression ratios/reconstruction. This is a well known result. I do not think DAC, Encodec, and SemantiCodec are reasonable baselines to compare to, as none claim to be English speech codecs.\\n\\nWe are certainly not claiming this as a point of novelty, nor is it something we are trying to demonstrate. We acknowledge the relatively limited domain (English speech) as a limitation of this initial iteration of our model, and we intend to address this by scaling to more diverse datasets in the future. This has been made more explicit in **Sec.3.7**. We would argue that the presented results, along with the new experiment in **Appendix A.5**, which shows good generalization to unseen languages, strongly suggest that model performance will increase even further when extended to a larger and more diverse dataset. Our opinion is that this enhances the attractiveness of this architecture as the basis for future work.\\n\\nRegarding the fairness of baseline choices, we agree that all the chosen baselines were not exactly designed with the same goals as our proposed model, nor necessarily trained on the same data. This is not an intentional choice on our part, as we\\u2019re restricted by which models are released to the public. The chosen baselines represent generally the most widely used and positively considered neural codecs for speech, showing strong performance. 
Hopefully there will be fairer baselines available in the future for this comparison, as larger neural audio codecs specifically targeting downstream generative use cases become more prevalent.\"}", "{\"title\": \"Response to reviewer VKr9\", \"comment\": \"Dear reviewer VKr9 ,\\n\\nThank you for reading our paper and for providing such insightful comments. We made the following changes to address your review:\\n\\n>Q1: \\u201cIt would be great if the author can present the results with causal encoder so that it can be compared with DAC/Encodec/Mimi in a relative fair comparison (apart from the model size difference).\\u201d\\n\\n* We trained a causal version of the codec as a finetune from the main presented model. This is presented in **Appendix A.4**. We achieved this by switching convolutions to causal versions and changing the sliding attention window to be purely causal. The model performs very close to the main model presented in the paper, so the trade-off for applying the model in a streaming situation appears to be minimal, assuming sufficient computation resources are available to execute the model. We did not want to alter the main intention of the paper (we were not concentrated on streaming use-cases), so this is kept as an extension rather than used to replace the non-causal version of the model in the main comparisons. 
New audio examples created by the causal TAAE model can be found on the anonymous website.\\n\\n| **Model** | **BPS** | **SI-SDR \\u2191** | **Mel \\u2193** | **STFT \\u2193** | **PESQ \\u2191** | **STOI \\u2191** | **MOSNet \\u2191** |\\n|-----------------------|---------|--------------|-----------|------------|------------|------------|--------------|\\n| Mimi | 1100 | 2.20 | 0.94 | 1.31 | 3.01 | 0.90 | 3.24 |\\n| TAAE (causal) | 700 | 4.04 | 0.94 | 1.31 | 3.09 | 0.92 | 3.34 |\\n| TAAE (non-causal) | 700 | 4.73 | 0.86 | 1.26 | 3.09 | 0.92 | 3.36 |\\n\\n* We have added some discussion in **Appendix B.2**, which compares the receptive fields, causality and latency of our proposed model and the baselines.\\n\\nThank you again for your reviewing efforts. We\\u2019re happy to address any more comments or concerns you may have.\"}", "{\"title\": \"Answers to Zhen's questions\", \"comment\": \"Hi Zhen,\\n\\nThank you for your kind words. We're happy that you enjoyed the paper!\\n\\n>I'm sorry\\u2014I really wanted to try your Patched Transform scheme, but I'm not familiar with DSP, so implementing the code is hard for me.\\n\\nHere's an implementation for you:\\n```python\\nimport torch.nn as nn\\nfrom einops import rearrange\\n\\nclass PatchedPretransform(nn.Module):\\n    def __init__(self, channels, patch_size):\\n        super().__init__()\\n        self.channels = channels\\n        self.patch_size = patch_size\\n\\n    def encode(self, x):\\n        x = rearrange(x, \\\"b c (l h) -> b (c h) l\\\", h=self.patch_size)\\n        return x\\n\\n    def decode(self, z):\\n        z = rearrange(z, \\\"b (c h) l -> b c (l h)\\\", h=self.patch_size)\\n        return z\\n```\\n\\n> Have you tried the 50Hz x1 setting? Or, what performance do you expect your method to achieve under this setting?\\n\\nYes, we've tried this - although we haven't fully trained a model at this temporal resolution as we were targeting a lower bitrate. 
The reconstruction performance should be better than the 25Hz model we presented (assuming the bottleneck is the same) as the compression ratio is less.\\n\\n>I noticed that the number of channels in your discriminator implementation is 256. I'm using the MRD from DAC (link). I tried setting the channels to 256, and the discriminator's parameters are about 100M. Is this correct? How does it compare to the number of parameters in your discriminator?\\n\\nWe're using the Encodec discriminator which has less params in general than the DAC discriminator. Total param count is 35.5M.\\n\\n>Regarding SYSTEMATIC BIAS, I tried it but suspect there may be an issue with my implementation. Here's my code; I'm not sure if I understood your paper correctly:\\n\\nYour code looks correct, but you'll need to set `normalized = True` in the `torch.stft` call - otherwise it'll produce very large values. This is turned on in Encodec (I'd assume also in DAC) as otherwise you'll get huge activations inside your discriminator.\\n\\n>About the encoder, I tried using mel + Transformer or STFT + Transformer, but the former resulted in worse performance, and the latter's performance was similar to DAC's encoder (about 40M parameters). I'm not sure if your method has experimented with different encoders. If you still use the CNN-based DAC encoder, is there a significant difference in your method? My thinking about the encoder is that since the encoder's features have to go through VQ, it means that no matter how powerful your encoder is, after VQ, it can only retain key information. Does this mean that for the encoder, maybe we don't need a very powerful model\\u2014perhaps DAC's encoder is sufficient?\\n\\nOur feeling is that a powerful encoder actually helps the model effectively communicate information through the very restrictive FSQ bottleneck. 
We did experiment with smaller encoders, but we found that this makes the decoder act more like a generative model - we could still get plausible speech output but the alignment with the input was less precise. Investigating this relationship in more detail (and if it holds with VQ instead of FSQ) could be a very interesting line of future research.\\n\\nThanks again for your comments! Happy to discuss more if you have further questions.\"}", "{\"title\": \"Response to reviewer bLHu (Part 2/2)\", \"comment\": \">Q4: Beyond length generalization, can TAAE perform streaming encoding/decoding (as most of the existing works compared in this paper)? If so, what is the size receptive field? how does it affect the latency of the codec? how does it compares to conventional CNN-based codec models?\\n\\nWe trained a causal version of the codec as a finetune from the main presented model. This is presented in **Appendix A.4**. We show objective metrics vs our main model and vs Mimi, which show only minor degradation in causal mode. We\\u2019ve also included discussion of latency in **Appendix B.2**. The anonymous website has been updated with new audio examples generated by the causal TAAE model.\\n\\n| **Model** | **BPS** | **SI-SDR \\u2191** | **Mel \\u2193** | **STFT \\u2193** | **PESQ \\u2191** | **STOI \\u2191** | **MOSNet \\u2191** |\\n|-----------------------|---------|--------------|-----------|------------|------------|------------|--------------|\\n| Mimi | 1100 | 2.20 | 0.94 | 1.31 | 3.01 | 0.90 | 3.24 |\\n| TAAE (causal) | 700 | 4.04 | 0.94 | 1.31 | 3.09 | 0.92 | 3.34 |\\n| TAAE (non-causal) | 700 | 4.73 | 0.86 | 1.26 | 3.09 | 0.92 | 3.36 |\\n\\n>Q5: I believe the fundamental differences between TAAE and CNN-based codec models should be discussed in the paper more throughout and carefully. 
Both advantages and disadvantages should be clearly stated and summarized in the main body of the paper.\\n\\nWe\\u2019ve added much more context and discussion about CNNs vs transformers, including some broader discussion of rationale in **Sec.1** and **Sec.2.1** within the main text and some more detailed explanation behind our motivations in **Appendix B.3**. Our main motivation for exploring a transformer-based architecture is the scaling properties shown in other domains, but we also believe that attention may fundamentally be better placed to address the irregular distribution of information in audio and speech waveforms.\\n\\nThank you again for your extremely helpful perspectives and feedback. If you have any more comments or concerns, we\\u2019re happy to address them.\"}", "{\"title\": \"Thanks to reviewer Baq6\", \"comment\": \"Hi reviewer Baq6,\\n\\nThank you for reviewing our changes and increasing your rating. We agree with you about the shortcomings of MOSNet. It was included primarily due to its inclusion in recent related works (e.g. the Moshi/Mimi paper).\\n\\nThank you again for your work on reviewing this paper. We are glad that you raised the objections that you did, as addressing them has made this a stronger paper.\"}", "{\"metareview\": \"**Paper Summary:**\\n\\nThis paper describes a low-bitrate neural discrete audio codec. This is achieved by scaling up an encoder-decoder Transformer architecture using finite scalar quantization (Mentzer et al., ICLR 2024) in contrast to the more traditional VQ-VAE discretization. Experimental results are strong for both quantitative metrics and human evaluation.\\n\\n**Strengths:**\\n\\nThere is general consensus that the empirical analysis is strong, demonstrating a new state-of-the-art in low bit rate speech compression.\\n\\nAll reviewers praised the paper's insights. 
There is also mostly positive praise for the paper's clarity and organization.\\n\\nThe authors are committed to releasing the artifacts of this work, which will be of great value to the community.\\n\\n**Weaknesses:**\\n\\nAs Reviewers 1mKc and Baq6 point out, achieving better results by scaling up neural networks is no longer a surprising result for the community. On the other hand, this work identifies and documents many valuable insights into the architectural decisions required to effectively scale neural audio codecs.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed most of the concerns raised by reviewers in the discussion period, and have revised the paper accordingly to reflect these highly productive discussions. The paper has improved considerably over the course of the reviewing cycle.\"}", "{\"title\": \"Reminder to check our changes\", \"comment\": \"Hi reviewer Baq6,\\n\\nWe'd really appreciate it if you could take a look at the changes we made to address your concerns. We think the results on multi-lingual generalisation are especially interesting! We also made some significant changes to address your other concerns. We're available here to discuss.\"}", "{\"title\": \"Response to reviewer Baq6 (Part 3/3)\", \"comment\": \"> Q5: It is unclear why SpeechTokenizer was left out of the perceptual evaluation, as it is the most comparable to TAAE in terms of architecture and training domain. Comparison to SpeechTokenizer could also boost claims that the FSQ significantly outperforms RVQ schema. Why was SpeechTokenizer not included in the perceptual evaluation?\\n\\nThank you for your suggestion. We agree that including SpeechTokenizer in the perceptual evaluation is important. We have now incorporated the SpeechTokenizer (1500 bps) variant into our new subjective test. 
The results show that SpeechTokenizer (1500 bps) outperforms SemantiCodec (340 and 680 bps) and Mimi (550 bps) but performs worse than Mimi (1100 bps) and TAAE (400 bps and 700 bps). The updated results are presented in Figure 2.\\n\\n> Q6: I think the presented MOS scores are confusing, as MOSNet has estimated the MOS closer to 3 than to 5. How do you explain the gap in your perceptual evaluation MOS score and the estimate provided by MOSNet?\\n\\nThank you for your observation regarding the MOS scores. To clarify, MOSNet is a neural network model trained to correlate with human MOS ratings, but its estimates are influenced by the quality and characteristics of its training data - which may introduce some inaccuracies or biases. Furthermore, in our previous MOS test, the goal was to evaluate how the reconstructed audio sounds compared to the ground truth. Given that the test data's quality may not be exceptionally high, there is naturally some difference between the MOS scores and the estimates provided by MOSNet. The human MOS scores in our previous test reflect subjective evaluations based on the perceived quality of the reconstructions relative to the ground truth, while MOSNet estimates are based on its learned correlations regarding speech quality. Our evaluated MOSNet scores for the baselines are consistent with the results reported in the Moshi (Mimi) technical report.\\n\\n> Q7: A MUSHRA evaluation should have been used instead for pairwise comparison between codecs and is standard in the literature cited in this paper. Why was a MOS evaluation chosen instead of MUSHRA? Furthermore, the authors should include demographic breakdowns of the perceptual evaluation, as well as a description of the listening setup, as is standard for speech codecs. Who was included in the perceptual evaluation, and what was their listening setup?\\n\\nWe re-ran the perceptual tests using a MUSHRA setup and included SpeechTokenizer as an additional baseline. 
While the results are not significantly different from our previous MOS evaluation, we agree that the MUSHRA format is more appropriate, as it aligns better with standard references in the literature. Additionally, we provided further details about the listening setup in Section 3.3 and included demographic breakdowns of the MUSHRA evaluation in Appendix E.\\n\\n| Model | MUSHRA Score |\\n|---------------------------|--------------|\\n| SemantiCodec (340 bps) | 51.06 |\\n| Mimi (550 bps) | 57.48 |\\n| SemantiCodec (680 bps) | 59.13 |\\n| SpeechTokenizer (1500 bps)| 70.67 |\\n| Mimi (1100 bps) | 78.62 |\\n| TAAE (400 bps) | 88.17 |\\n| TAAE (700 bps) | 89.50 |\\n| Ground Truth | 92.20 |\\n\\nThank you for your many insightful comments and suggestions. We hope you agree that the paper is much improved with these additions. If you have further topics for us to address, please share and we will be happy to tackle them.\"}", "{\"title\": \"Response to reviewer bLHu (Part 1/2)\", \"comment\": \"Dear reviewer bLHu,\\n\\nThank you for your efforts in reviewing this paper, and for your many insightful comments. We\\u2019ve tried to address your concerns with the following changes:\\n\\n>Q1: This appears to be an interesting observation and a critical hyper-parameter for training the model as the authors spent a paragraph discussing it, but neither the exact value nor the study/experiment on \\u03f5 is provided.\\n\\nIn **Appendix B.1** we have added some further discussion about the \\u03f5 value of the layer norms, as well as the exact value used in the experiments. Thank you for spotting this missing piece of information.\\n\\n>Q2: The overall objective of the model is not explicitly given but described in a more hand-wavy style instead, which could easily lead to misunderstanding. The full objective should be listed explicitly together with the weight/importance for each term/component in the main paper or appendix.\\n\\nWe have expanded **Sec. 2.4**. 
to give much more detail about the training objective, in both the pretraining and finetuning stages. \\n\\n>Q3: What is the trade-off between length generalization and sliding window size for TAAE? How do time complexity and empirical inference time change accordingly? How do these numbers compare to those of CNN-based models?\\n\\n* We\\u2019ve added **Appendix B.2** discussing the relative receptive fields of our proposed model and the baselines. Interestingly, our proposed model does not have a wildly different receptive field to existing models - partly as very few are purely convolutional. Several baselines use RNNs with effectively unlimited receptive fields, and use chunked inference to counteract any downside from this. \\n\\n* We\\u2019ve added an experiment addressing length generalization by testing inference of the TAAE and baselines with a variety of utterances lengths. This can be found in **Appendix A.6**. The results show mild degradation at longer utterances, which can be mitigated if necessary by chunked inference.\\n\\n* We\\u2019ve added an experiment which examines empirical inference time. This can be found in **Appendix A.9**. This experiment shows that the TAAE architecture is competitive with the baselines in terms of empirical inference time, despite utilizing a much larger number of parameters. The architecture benefits greatly from the patched representation taking care of much of the down/upsampling, as well as the large amount of effort invested by other researchers and engineers in optimizing the components of the transformer architecture.\\n\\n* We have interpreted \\u2018time-complexity\\u2019 to mean Big O analysis with respect to varying sequence length. This is now discussed briefly in **Appendix B.2**. 
Our model (and all the baselines) scale as O(n) with sequence length.\"}", "{\"title\": \"Response to reviewer 1mKc (Part 1/2)\", \"comment\": \"Dear reviewer 1mKc,\\n\\nThank you for your efforts in reviewing this paper, and for the very insightful comments. We\\u2019ve made a number of modifications in order to address your concerns. \\n\\n>Q1: \\\"The proposed method relies on the dimension reduction part for its dimension-specific scalar quantization to work. And that's why they could achieve higher codebook utilization. Meanwhile, there is also a trend that higher codebook utilization leads to lower coding gain if entropy coding is applied after tokenization. Indeed, the paper does not mention anything about Huffman coding results, which the proposed method might not be able to take advantage of due to the low dimensionality and high codebook utilization. At the same time, the RVQ-based ones might have a better chance of compressing more via Huffman coding. I wish the paper provided an in-depth discussion about it. In my opinion, all the coding gain and performance-related arguments must be based on the entropy-coded bitrates of all codecs mentioned.\\\"\\n\\nOur initial draft did not contain discussions on the Huffman coded bitrates of the proposed model or the baselines, as our use-case for this model was not focused on transmission or storage of speech. However, we agree that this information is important and valuable to the wider speech coding community. To address this, we have added a new section in **Appendix A.8** that examines and computes the codebook utilisation and Huffman-coded bitrates for the proposed model and the baselines. Our analysis shows that Huffman coding achieves a moderate reduction in bits-per-second for earlier RVQ-based codecs such as Encodec. 
In contrast, more recent RVQ-based codecs (e.g., DAC, SpeechTokenizer, Mimi), which leverage advanced methods like EMA updates, factorized codes, and L2-normalized embeddings to boost RVQ codebook utilisation significantly, derive only marginal benefits from Huffman coding. Similarly, FSQ-based tokens produced by TAAE achieve comparable levels of gains in coding efficiency with Huffman coding to these modern RVQ models.\\n\\n>Q2: \\\"The other main criticism is that the proposed model is just a lot bigger than the other models. I don't mean to argue that a bigger codec necessarily results in a better coding gain, but in general, it is also true that there is a relation. I wish the paper had provided an ablation test that investigated the impact of the different sizes of the proposed model.\\\"\\n\\nWe conducted scaling experiments with TAAE architectures containing approximately 250M, 500M, and 1B parameters, which is shown in **Appendix A.2**. The results demonstrate that scaling up the parameter count yields clear improvements in objective metrics, although the smaller models still perform respectably compared to the baselines.\\n\\n| Param. Count | SI-SDR \\u2191 | Mel \\u2193 | STFT \\u2193 | PESQ \\u2191 | STOI \\u2191 |\\n|--------------|----------|-------|--------|--------|---------|\\n| 240M | 3.52 | 1.24 | 1.67 | 2.74 | 0.87 |\\n| 540M | 4.31 | 1.21 | 1.66 | 2.80 | 0.88 |\\n| 950M | 4.80 | 1.18 | 1.59 | 2.82 | 0.88 |\\n\\nWe would also like to highlight that scaling transformer-based codec architectures to 1B parameters is a key contribution of this work. Unlike traditional CNN-based codecs, TAAE uses a transformer-based architecture that offers enhanced scalability. However, in our early experiments we found it challenging to make Transformers work effectively for audio coding tasks. 
The success of the TAAE model is due to our specific architecture design, empirical findings, and optimizations, as described in **Section 2** and further analyzed in **Appendix B**.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Excellent paper\", \"comment\": \"Hello,\\n\\nI am very, very impressed with your work, as I am currently researching codecs with single VQ and Transformer decoders. Your work has given me a lot of inspiration. If it's convenient, I would love to discuss it with you.\\n\\nFirst, I tried your proposed new discriminator FFT parameters {78, 126, 206, 334, 542, 876, 1418, 2296} to replace the old one {128, 256, 512, 1024, 2048} and found it works exceptionally well, improving the speaker similarity before and after from 0.80 to 0.82.\\n\\nMy current experiments are trained on 16kHz audio, using 50 tokens per second (50x1), with an STOI of about 0.91 and a PESQ of 3.01 and sim of 0.82. My decoder is a Transformer (hidden size 1024, 12 layers) with vocos. I'm sorry\\u2014I really wanted to try your Patched Transform scheme, but I'm not familiar with DSP, so implementing the code is hard for me. I'm also using FSQ with a codebook size of 65536 (4^8).\\n\\nI have a few questions I'd like to ask you:\\n\\n1. Have you tried the 50Hz x1 setting? Or, what performance do you expect your method to achieve under this setting?\\n\\n2. I noticed that the number of channels in your discriminator implementation is 256. I'm using the MRD from DAC ([link](https://github.com/descriptinc/descript-audio-codec/blob/c7cfc5d2647e26471dc394f95846a0830e7bec34/dac/model/discriminator.py#L136)). I tried setting the channels to 256, and the discriminator's parameters are about 100M. Is this correct? How does it compare to the number of parameters in your discriminator?\\n\\n3. Regarding SYSTEMATIC BIAS, I tried it but suspect there may be an issue with my implementation. 
Here's my code; I'm not sure if I understood your paper correctly:\\n\\n```python\\nx_stft = torch.stft(x, fft_size, hop_size, win_length, window.to(x.device), return_complex=True)\\ngamma = 1/2\\nX_magnitude = torch.abs(x_stft)\\nx_stft = x_stft * torch.pow(X_magnitude + 1e-8, gamma)\\n```\\n\\n4. About the encoder, I tried using mel + Transformer or STFT + Transformer, but the former resulted in worse performance, and the latter's performance was similar to DAC's encoder (about 40M parameters). I'm not sure if your method has experimented with different encoders. If you still use the CNN-based DAC encoder, is there a significant difference in your method? My thinking about the encoder is that since the encoder's features have to go through VQ, it means that no matter how powerful your encoder is, after VQ, it can only retain key information. Does this mean that for the encoder, maybe we don't need a very powerful model\\u2014perhaps DAC's encoder is sufficient?\\n\\nIf you are busy, please don't feel pressured to reply immediately, but I am really looking forward to discussing this with you.\\n\\nThank you very much for your paper. I have discussed your paper with many collaborators working on codec, and we all think it's an excellent paper. Your discussions on various technical details are really great. I'm very certain that your work will have a significant impact on the development of audio codec!\\n\\nLooking forward to your reply!\"}", "{\"comment\": \"I would like to thank the authors for addressing my concerns. With the additional implementation details and in-depth analysis, I believe this work can make a significant contribution to the field. I have raised my rating accordingly.\"}", "{\"comment\": \"Thank you for your in-depth response. I very much appreciate the extra work you've done to address my concerns. 
The multilingual evaluation is particularly illuminating.\\n\\n>However, we\\u2019re not aware of existing work showing that audio codec performance scales with parameter count. If we\\u2019re missing some important work that establishes this, it\\u2019d be much appreciated if you could share it.\\n\\nA quick scan of \\\"High-Fidelity Audio Compression with Improved RVQGAN\\\" by Kumar et al. shows the following sentence at the beginning of Section 4.5 Ablation Study: \\\"We find that varying the decoder dimension has some effect on performance, with smaller models having consistently worse metrics.\\\" \\n\\nA neural network audio codec is still a neural network, and is governed by the same principles as any other neural network. \\n\\n>To clarify, MOSNet is a neural network model trained to correlate with human MOS ratings, but its estimates are influenced by the quality and characteristics of its training data - which may introduce some inaccuracies or biases. \\n\\nGiven these well-known issues with MOSNet, it is worth considering how including this metric bolsters your arguments. Uncharitable readings of the presented evaluation could suggest examples were cherry-picked for the subjective listening test and that overall performance is lacking. Regardless, the rest of the objective evaluation makes a convincing case that TAAE outperforms the selected methods.\\n\\n>We re-ran the perceptual tests using a MUSHRA setup and included SpeechTokenizer as an additional baseline.\\n\\nThank you for addressing this. For future reference, MUSHRA tests are often followed by a significance test, such as a post hoc Tukey HSD ANOVA, to observe whether ratings significantly differ from one another, or if differences are the result of random chance. Some type of error bar or standard deviation would help readers quickly glean the performance of your algorithm compared to others. 
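The significance testing suggested in the paragraph above can be sketched with a minimal one-way ANOVA over per-condition MUSHRA ratings; the F statistic computed here would then feed a post hoc test such as Tukey's HSD. The rating arrays below are made up purely for illustration:

```python
import numpy as np

def one_way_anova_f(groups):
    # F statistic for a one-way ANOVA over rating groups (one group per codec):
    # between-group variance divided by within-group variance.
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    grand_mean = all_x.mean()
    k, n = len(groups), all_x.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical MUSHRA ratings for two codecs and the hidden reference
ratings = [
    [78, 82, 75, 80, 79],   # codec A
    [55, 60, 52, 58, 57],   # codec B
    [92, 95, 90, 94, 93],   # hidden reference
]
f_stat = one_way_anova_f(ratings)
```

A large F relative to the F-distribution critical value for (k-1, n-k) degrees of freedom indicates that at least one condition's mean rating differs; the pairwise Tukey comparisons then identify which ones.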
\\n\\nIn light of your follow-up evaluation, I have raised my rating of the paper.\"}", "{\"title\": \"Response to reviewer 1mKc (Part 2/2)\", \"comment\": \">Q3: \\\"HuBERT can be as large as the proposed model (its X-Large version), and can turn into a codec with a vocoder.\\\"\\n\\nWe included an investigation of a HuBERT + HiFi-GAN Vocoder model as a baseline (unitHiFi-GAN). However, we were unable to find pretrained vocoder models compatible with HuBERT-XL and had to rely on a publicly available release using the smaller HuBERT-base (95M). The results of this investigation are presented in **Appendix A.7**.\\n\\nWe did not initially consider this as a baseline, as it is arguably a conditional generative model rather than a codec, and should be evaluated from a different perspective. The HuBERT-based codec approach benefits from its semantic representations learned through self-supervised objectives. However, it may introduce trade-offs, such as the loss of acoustic details and timbre in the resynthesized audio and would suffer greatly in objective metrics, which was confirmed by our results. Additionally, we updated the unitHiFi-GAN reconstructed speech examples on our anonymous website for reference. While the unitHiFi-GAN model generates plausible speech, it struggles to preserve speaker identity.\\n\\nThese limitations could potentially be addressed by incorporating discrete tokens from additional speech models (e.g., for pitch tracking or speaker classification), and the HuBERT-XL (1B) model might outperform the HuBERT-BASE (95M) model in this context. 
However, our work focuses on developing an end-to-end waveform codec model designed to achieve high-quality reconstruction while maintaining low-bitrate compression, distinguishing our approach from HuBERT-based methods.\\n\\n| Model | BPS | SI-SDR \\u2191 | Mel \\u2193 | STFT \\u2193 | PESQ \\u2191 | STOI \\u2191 | MOSNet \\u2191 |\\n|---------------|------|-----------|--------|--------|--------|--------|----------|\\n| unitHiFi-GAN | 500 | -45.95 | 3.14 | 3.24 | 1.12 | 0.16 | 2.98 |\\n| TAAE | 400 | 3.18 | 0.97 | 1.35 | 2.96 | 0.90 | 3.36 |\\n| TAAE | 700 | 4.73 | 0.86 | 1.26 | 3.09 | 0.92 | 3.36 |\\n\\n\\n>Q4: \\\"The paper provides various tips and useful information about their model training, but they are scattered in different places without a clear organization.\\\"\\n\\nWe have done some re-organization of the paper to hopefully make the presentation more consistent. The \\u2018Architecture\\u2019 section now only contains high-level discussion of the architecture, with more detailed discussion moved to the Appendices and quantitative information moved to the \\u2018Experiments\\u2019 section. We have also added a number of paragraphs allowing readers to understand which additional experiments were performed, and where to find them.\\n\\nThanks again for your efforts, please let us know if there are any other changes that would improve the paper in your view.\"}", "{\"summary\": \"This paper proposed TAAE, an audio codec model that uses Transformer encoders as the main building block to replace conventional convolution-based modules. To accommodate the choice, TAAE performs downsampling mainly by patchifying the time domain signal and training transformer encoder stacks on top of the downsampled sequence. For discretizing audio, TAAE relied on FSQ-based bottleneck that approximates continuous low-dimensional latent numerically. 
Experimental results show TAAE achieved outstanding speech quality on autoencoding at a significantly lower bit rate compared to existing models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"- The idea of using Transformer and the main architecture for the neural audio codec learning is novel and well executed.\\n- Judging from the audio samples on the demo page and MOS study, TAAE is clearly state-of-the-art in low bit rate speech compression.\\n- This paper provided a lot of detailed knowledge, empirical findings, and engineering improvements that can truly benefit the audio codec research community. I personally learned a lot in the details such as the discussion on systematic bias of discriminator, choice of filterbank, observation on the latent representation of silence frames with self-attention, etc.\\n\\n---\\n\\n### Justification for rating\\nOverall, I believe this work is novel enough and provides solid contributions to the field.\\nHowever, some improvement might be necessary (see weaknesses below).\\nIf the authors can properly address these concerns and update the submission accordingly, I would be more than happy to raise my rating on this paper.\", \"weaknesses\": \"Given the main contribution of this work is in exploring an alternative architecture for codec models, completeness in terms of design details and reproducibility are expected. In contrast, I found a lot of details missing or vague. (Although the authors state the code will be released later, the paper itself should still be comprehensive alone.) Here are some examples:\\n\\n---\\n\\n> ($\\\\S$2.1) ... Instead we raise the $\\\\epsilon$ constant used in the calculation of normalization factors in the layer norm blocks ... 
allows convergence of the architecture.\\n\\nThis appears to be an interesting observation and a critical hyper-parameter for training the model as the authors spent a paragraph discussing it, but neither the exact value nor the study/experiment on $\\\\epsilon$ is provided.\\n\\n---\\n\\n> ($\\\\S$2.4)...For training the codec itself, we primarily use a normalized feature-matching L1 loss on the per-layer features of the discriminator network ... In addition we found it beneficial to include a traditional L1 reconstruction loss to boost convergence at the beginning of the training process ...\\n\\nThe overall objective of the model is not explicitly given but described in a more hand-wavy style instead, which could easily lead to misunderstanding. The full objective should be listed explicitly together with the weight/importance for each term/component in the main paper or appendix.\\n\\n---\\n\\n> ($\\\\S$2.1) ... The self-attention uses a sliding window of size 128, to restrict receptive field and aid generalization of the architecture to arbitrary length sequences. \\n\\nThis choice appears as one simple sentence, but self-attention is the key difference between TAAE and prior works, which changes the properties of the model dramatically.\\nIf my understanding is correct, this means the receptive field of the first layer is already 2.56 seconds (128 frames $\\\\times$ 20 ms-per-frame), and the number doubles for every layer. It is obvious that TAAE has a much larger receptive field size comparing to convolution-based models. While this is an advantage, it could also lead to some problems that are not discussed in the paper.\\n \\n- What is the trade-off between length generalization and sliding window size for TAAE? How do time complexity and empirical inference time change accordingly? 
How do these numbers compare to those of CNN-based models?\\n- Beyond length generalization, can TAAE perform streaming encoding/decoding (as most of the existing works compared in this paper)?\\n - If so, what is the size of the receptive field? how does it affect the latency of the codec? how does it compare to conventional CNN-based codec models?\\n - If not, this should still be explicitly discussed as a limitation of the proposed framework in the paper.\\n\\nThese are just some examples. In short, I believe the fundamental differences between TAAE and CNN-based codec models should be discussed in the paper more thoroughly and carefully. Both advantages and disadvantages should be clearly stated and summarized in the main body of the paper.\\n\\n---\\n\\nI believe these concerns can all be addressed without additional training, thus should be easy enough to complete within the rebuttal period.\", \"questions\": \"(please see weaknesses above)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Any further concerns?\", \"comment\": \"Hi reviewer 1mKc,\\nThanks for updating your score! If you have any remaining concerns with the work, please feel free to share and we can discuss.\"}", "{\"title\": \"General comments to all reviewers\", \"comment\": [\"Dear reviewers,\", \"We have now uploaded a new version of the paper with a significant amount of new content that we believe should address the majority of your comments. We've replied to each of you individually to give specific replies to your concerns, but we also wanted to highlight here the new content suggested by reviewers.\", \"We performed an ablation showing that the performance of the TAAE scales clearly with parameter count. This is described in **Appendix A.2**.\", \"We trained a causal finetune of the presented model. 
This demonstrates minor performance degradation compared to the main presented model but is competitive with the strongest streaming-focused baseline. This is described in **Appendix A.4**.\", \"We re-ran the subjective perceptual test with a more conventional MUSHRA format, and the addition of SpeechTokenizer as an extra baseline. The results are consistent with those presented in the previous version of the paper.\", \"We tested the generalization of the presented model and baselines to unseen languages, by evaluating objective metrics on the Multilingual LibriSpeech dataset. The model shows strong generalization performance, reaching parity or outperforming models trained on multi-lingual or general audio data. This is presented in **Appendix A.5**.\", \"We tested the generalization of the model and baselines to utterances of a variety of lengths. This is presented in **Appendix A.6**\", \"We performed a comparison with an alternative style of codec model, constructed using a pretrained HuBERT semantic model and a HifiGAN-derived vocoder model. This is presented in **Appendix A.7**.\", \"We performed entropy coding experiments on the model and baselines. This revealed that our model and most baselines exhibited high codebook utilization, and did not benefit strongly from entropy coding. This is presented in **Appendix A.8**.\", \"We measured RTF for the proposed model and baselines, showing that despite a much larger parameter count, inference time for the TAAE is competitive with baselines. This is presented in **Appendix A.9**\", \"We added further clarification about the training objective in **Sec.2.4**, and clarified many hyperparameters in **Sec.3.2**\", \"We added discussion of the receptive field, causality and latency of the presented model and baselines as well as additional discussion about the merits of convolution vs attention. 
This is presented in **Appendix B2--B3**\", \"We added new audio examples demonstrating the generalization of the model to unseen languages and also demonstrating the causal variant of the model. These can be found at the following link: https://taae-iclr-2025.github.io/taae_anonymised/\", \"Thank you for working with us on making this paper the best it can be! We very much appreciate your input.\"]}", "{\"summary\": \"This work describes an approach to leverage scaled transformer model architecture to achieve low-bitrate and high-quality speech coding. Different from conventional neural audio codec approaches, this work leverages the finite scalar quantization (FSQ) based bottleneck to achieve high codebook utilization rate and avoid the difficulty in training a VQ-VAE auto-encoder. Experimental results show this work outperforms existing baselines in both objective and subjective tests.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well written and very clear to follow. In the introduction part, it clearly presents the motivations and has an excellent survey of the existing methods.\\n\\nThough using transformers to scale and leverage FSQ for high codebook utilization is not something new, this paper presents the motivations of these changes, the associated challenges and their mitigations. This paper also introduces a new method so that FSQ can be used in a similar way as RVQ where a varying bits-per-second rate can be achieved. 
\\n\\nThis paper presents strong experimental results, significantly improving over the existing baselines (but at the cost of increased computation and latency).\", \"weaknesses\": \"If I understand the proposed model correctly, it is based on transformer layers with a local attention of 128 (both left and right), which means different from DAC/Encodec/Mimi etc which use causal encoders, the encoder in the proposed method is not causal, and it will introduce a latency up to the patch length (which is 320/16k ~ 20ms?). It would be great if the author can present the results with causal encoder so that it can be compared with DAC/Encodec/Mimi in a relatively fair comparison (apart from the model size difference).\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present an encoder-decoder transformer-based architecture for encoding speech at very low bitrate, below 1kbps. The Transformer Audio AutoEncoder (TAAE) uses a polyphase time-frequency representation to perform downsampling/upsampling before the encoder and after the decoder of TAAE. Finite Scalar Quantization (FSQ) is employed within the encoder/decoder bottleneck to mitigate codebook underutilization typically seen with vector quantized (VQ) and residual vector quantized (RVQ) approaches. The authors combine an L1 discriminator feature loss with decaying L1 waveform loss and late perceptual reconstruction loss for training. The TAAE is trained on 105k hours of English speech sampled at 16kHz. The reconstruction capability of TAAE is compared to the Descript Audio Codec (DAC), Encodec, SpeechTokenizer, SemantiCodec, and Mimi. A mean opinion score (MOS) is also produced from a perceptual evaluation comprised of 23 participants comparing TAAE to Mimi and SemantiCodec. 
The authors demonstrate that TAAE obtains better reconstruction performance according to both objective measures and MOS. The authors also demonstrate that one variant of the TAAE codebook attains 98% utilization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality:\\nThe results presented in Appendix B are enlightening regarding the use of non-power-scaled FFT sizes. The FSQ-based bottleneck also seems to overcome common issues in the training of RVQ systems.\", \"quality\": \"The authors provide a wide variety of objective assessments for their architecture's performance. The authors also do a good job of citing current literature.\", \"clarity\": \"The authors very clearly describe their architecture and the motivations for their architectural decisions. The appendices were well organized and helpful.\", \"significance\": \"Appendix B and the FSQ bottleneck are worthwhile contributions.\", \"weaknesses\": \"I do not think it is novel to scale the parameter count of a neural network autoencoder and demonstrate better compression ratios/reconstruction compared to smaller architectures. This is a well known result.\\n\\nI do not think it is novel to restrict the domain of a neural network autoencoder in comparison to another architecture trained on a more general domain and demonstrate better compression ratios/reconstruction. This is a well known result.\\n\\nBy restricting the domain of their speech audio corpus to English speech, the authors have produced an English speech audio codec. In order to claim that this is a \\\"speech codec,\\\" the authors should evaluate on non-English speech to demonstrate generalization capabilities. \\n\\nI do not think DAC, Encodec, and SemantiCodec are reasonable baselines to compare to, as none claim to be English speech codecs. \\n\\nMimi focuses on streaming and causality with 1/10 the parameter count of TAAE, which makes no claims regarding streaming capability. 
This also leads to an odd comparison as the goals of Mimi and TAAE are not aligned. \\n\\nIt is unclear why SpeechTokenizer was left out of the perceptual evaluation, as it is the most comparable to TAAE in terms of architecture and training domain. Comparison to SpeechTokenizer could also boost claims that the FSQ significantly outperforms RVQ schema. \\n\\nI think the presented MOS scores are confusing, as MOSNet has estimated the MOS closer to 3 than to 5. A MUSHRA evaluation should have been used instead for pairwise comparison between codecs and is standard in the literature cited in this paper. Furthermore, the authors should include demographic breakdowns of the perceptual evaluation, as well as a description of the listening setup, as is standard for speech codecs.\", \"questions\": \"Why was SpeechTokenizer not included in the perceptual evaluation?\\n\\nHow do the design goals of Mimi align with that of TAAE? Why is Mimi a good baseline comparison to TAAE?\\n\\nWhy was a MOS evaluation chosen instead of MUSHRA?\\n\\nHow do you explain the gap in your perceptual evaluation MOS score and the estimate provided by MOSNet? \\n\\nWho was included in the perceptual evaluation, and what was their listening setup?\\n\\nHow does TAAE perform on non-English speech? And how does that compare to the more generalist NAC?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer Baq6 (Part 2/3)\", \"comment\": \">Q3: By restricting the domain of their speech audio corpus to English speech, the authors have produced an English speech audio codec. In order to claim that this is a \\\"speech codec,\\\" the authors should evaluate on non-English speech to demonstrate generalization capabilities. How does TAAE perform on non-English speech? 
And how does that compare to the more generalist NAC?\\n\\nWe\\u2019ve conducted an extensive set of generalization tests using Multilingual LibriSpeech, with results presented in **Appendix A.6**. The performance remains stable, showing no significant degradation when processing unseen languages. The performance of TAAE is either better than or comparable to models trained on multiple languages. This finding highlights TAAE\\u2019s potential for even greater performance when trained on multilingual data. We appreciate your suggestion to explore this further. Additionally, multilingual audio examples have been updated on the anonymous website.\\n\\n>Q4: Mimi focuses on streaming and causality with 1/10 the parameter count of TAAE, which makes no claims regarding streaming capability. This also leads to an odd comparison as the goals of Mimi and TAAE are not aligned. How do the design goals of Mimi align with that of TAAE? Why is Mimi a good baseline comparison to TAAE?\\n\\nThe main reason for choosing Mimi as a baseline is simply that it is the model which previously demonstrated the best reconstruction performance at very low bit-rates, outperforming other non-streaming speech/audio codecs in this range. We agree that the design goals are not aligned, but as stated before we are restricted by what models are available publicly. It is also an interesting point of comparison as it allows us to contrast a fairly traditional CNN-based codec with transformer blocks in the bottleneck with an architecture that relies mainly on patching and transformers.\\n\\nIn order to make this comparison more meaningful we added a new experiment, described in **Appendix A. 4**. In this experiment we finetuned the main presented version of TAAE to be fully causal, using causal convolution and causal attention windows. 
The results show that this causal TAAE is marginally degraded compared to our non-causal version, and outperforms Mimi in some objective metrics, despite being trained with significantly fewer steps and data hours. Additionally, audio examples generated by the causal TAAE model have been updated on the anonymous website.\\n\\n\\n| **Model** | **BPS** | **SI-SDR \\u2191** | **Mel \\u2193** | **STFT \\u2193** | **PESQ \\u2191** | **STOI \\u2191** | **MOSNet \\u2191** |\\n|-----------------------|---------|--------------|-----------|------------|------------|------------|--------------|\\n| Mimi | 1100 | 2.20 | 0.94 | 1.31 | 3.01 | 0.90 | 3.24 |\\n| TAAE (causal) | 700 | 4.04 | 0.94 | 1.31 | 3.09 | 0.92 | 3.34 |\\n| TAAE (non-causal) | 700 | 4.73 | 0.86 | 1.26 | 3.09 | 0.92 | 3.36 |\"}", "{\"summary\": \"The paper presents a new Transformer architecture for speech coding. It is characterized by the new scalar quantization method performed in a dimension-reduced space, which showed improved coding gain compared to other methods that are based on residual vector quantization. The paper also provides various training strategies that appear to be useful for neural codec training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed system appears to work well according to the objective metrics and subjective tests.\", \"The proposed FSQ idea seems to be a solid quantization option, improving the codebook utilization.\", \"The authors put a lot of effort in making it more scalable by adding multiple levels of quantization.\"], \"weaknesses\": [\"The proposed method relies on the dimension reduction part for its dimension-specific scalar quantization to work. And that's why they could achieve higher codebook utilization. Meanwhile, there is also a trend that higher codebook utilization leads to lower coding gain if entropy coding is applied after tokenization. 
Indeed, the paper does not mention anything about Huffman coding results, which the proposed method might not be able to take advantage of due to the low dimensionality and high codebook utilization. At the same time, the RVQ-based ones might have a better chance of compressing more via Huffman coding. I wish the paper provided an in-depth discussion about it. In my opinion, all the coding gain and performance-related arguments must be based on the entropy-coded bitrates of all codecs mentioned.\", \"The other main criticism is that the proposed model is just a lot bigger than the other models. I don't mean to argue that a bigger codec necessarily results in a better coding gain, but in general, it is also true that there is a relation. I wish the paper had provided an ablation test that investigated the impact of the different sizes of the proposed model.\", \"The paper provides various tips and useful information about their model training, but they are scattered in different places without a clear organization.\"], \"questions\": \"- How does it compare to the HuBERT-based codec? HuBERT can be as large as the proposed model (its X-Large version), and can turn into a codec with a vocoder attached to it as shown in [a].\\n\\n[a] A. Polyak et al. \\u201cSpeech Resynthesis from Discrete Disentangled Self-Supervised Representations,\\u201d Interspeech 2021\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4XHyThqt1C
Alternating Optimized Stochastic Vector Quantization in Neural Compression
[ "Runsen Feng", "Weiping Li", "Zhibo Chen" ]
In neural compression, vector quantization (VQ) is usually replaced by a differentiable approximation during training for gradient backpropagation. However, prior approximation methods face two main issues: 1) the train-test mismatch between differentiable approximation and actual quantization, and 2) the suboptimal encoder gradients for rate-distortion (RD) optimization. In this paper, we first provide new findings about how approximation methods influence the RD optimization in neural compression, and then propose a new solution based on these findings. Specifically, if a neural compressor is regarded as a source-space VQ, we find that the encoder implicitly determines the quantization boundaries, and the decoder determines the quantization centers. Suboptimal approximation methods lead to suboptimal gradients for RD optimization of quantization boundaries and centers. Therefore, to address the first issue, we propose an encoder-decoder alternating optimization strategy. The encoder is optimized with differentiable approximation, and the decoder is optimized with actual quantization to avoid the train-test mismatch of quantization centers. To address the second issue, we propose a sphere-noise based stochastic approximation method. During encoder optimization, VQ is replaced with a uniform sphere noise centered at the input vector. When the input vector is located at the quantization boundary, the encoder gradient is closer to the difference in RD loss between adjacent quantization centers, facilitating better encoder optimization. We name the combination of optimization strategy and approximation method as Alternating Optimized Stochastic Vector Quantization. Experimental results on various vector sources and natural images demonstrate the effectiveness of our method.
[ "vector quantization", "neural compression", "image compression" ]
https://openreview.net/pdf?id=4XHyThqt1C
https://openreview.net/forum?id=4XHyThqt1C
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kHkBWyq5YR", "iFUqLL6VyZ", "gAbT81rWcD", "WfS6lLq5Ry", "0JwPKYZ5mA" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1731089034372, 1730196176921, 1731679910700, 1731064911294, 1730710682648 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14210/Reviewer_am93" ], [ "ICLR.cc/2025/Conference/Submission14210/Reviewer_X4Nj" ], [ "ICLR.cc/2025/Conference/Submission14210/Authors" ], [ "ICLR.cc/2025/Conference/Submission14210/Reviewer_GzUa" ], [ "ICLR.cc/2025/Conference/Submission14210/Reviewer_udHP" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigates an improvement to the STE method used to train VQ-based neural compressors. For scalar quantization methods, the uniform additive noise method during training is shown to yield smooth gradients. This is not applicable to VQ-based methods, which so far mostly use STE. This is shown to yield highly non-smooth gradients. The proposed method, for VQ-based models, uses an alternating optimization scheme, combined with stochastic VQ. This is shown to yield smoother gradients than STE. Experimental results demonstrate superiority over STE-based VQ neural compressors.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem of train-test mismatch and other issues of STE in VQ-based models is relevant and timely\", \"The proposed method appears principled, and solves some of the challenges that are presented\", \"The work is overall well-motivated, and easy to follow\"], \"weaknesses\": [\"In sections 1-2, the problem is presented well, i.e., the need to solve some issues brought forth by STE in VQ-based compressors. However, section 3 dedicates a lot of explanation to how it is solved in scalar quantized neural compressors, which, to me, appears less important. 
In 3.2, I think it would be helpful to directly mention the VQ-STE section, as that is the setting which this paper's proposed method attempts to improve on. The UQ-AUN and UQ-STE can be mentioned briefly and details put in the appendix, as the scalar quantization setting is not the focus of the paper. This would provide more space to explain some of the details of the proposed method in section 4, which I found to be lacking. In addition, Figure 6 could be placed in section 4, and the reader can directly contrast that with Figure 4, and see how the non-smoothness issue is fixed via the proposed method.\", \"The experimental results section covers a broad range of sources, both synthetic and real-world, which is helpful. It is shown that the proposed method outperforms VQ-STE in all settings, and the UQ-AUN method provides a frame of reference. However, some baselines are missing. For example, the two methods soft-toward vector quantization (A2) and probabilistic vector quantization (A3) used in the ablation study (lines 509-511) should also be their own baselines with the Balle et al 2018 transforms. This is useful for understanding how the proposed method compares with other methods that don't use STE. Moreover, these baselines are mentioned in the related work but not compared to.\", \"In the related work, lines 138-140, it is said that section 3.2 addresses how prior works in VQ-based neural compression yield suboptimality. However, in the VQ setting, only the STE method from VQVAE is addressed. The methods from Agustsson et al, 2017, and Zhu et al 2022 are not addressed in section 3.2. It would be helpful to understand what these two methods' gradients look like in the 1-d Gaussian setting. 
This, combined with a BD-rate comparison in the results section, would help the reader understand how all the methods compare (conceptually and performance-wise), and strengthen the work overall.\", \"Furthermore, the experimental results of the proposed method on natural images use a fairly old architecture (which, to my understanding, uses transforms from Balle et al 2018, single-layer vector quantizer, and a discrete entropy model from VQVAE). There are more recent transforms that are higher-performing, such as those from [1], as well as vector quantizer layers, such as those from [2] and [3]. Experiments using these models would be more convincing. The authors say the proposed method cannot be used on more state-of-the-art models such as these. If true, I think that limits the applicability of the proposed method.\", \"There are some issues with the references in the related work, in the second paragraph.\"], \"references\": \"[1] El-Nouby, Alaaeldin, et al. \\\"Image compression with product quantized masked image modeling.\\\" arXiv preprint arXiv:2212.07372 (2022).\\n\\n[2] Feng, R., Guo, Z., Li, W., & Chen, Z. (2023). NVTC: Nonlinear vector transform coding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6101-6110).\", \"questions\": [\"For the natural image setting with the proposed method, are the transforms from Balle et al 2018, and entropy model the discrete entropy model from VQVAE?\", \"Why can the proposed method not be applied to architectures like NVTC [1] or PQ-VAE [2]? This is not explained, and it seems like the proposed method could be used on these architectures.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an alternating optimization method that incorporates stochastic quantization to improve the quantization process of nonlinear transform coding (NTC). 
The paper clearly formulates the optimization problem of NTC from the perspective of vector quantization, *i.e.*, the optimization of boundaries and codewords. Experiments on low-dimensional sources and natural images show that the proposed method outperforms the classical NTC method equipped with additive uniform noise and straight-through estimator on image compression.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is overall well written and easy to follow.\\n2. The authors provide a clear framework for analyzing the gradient approximation problem of NTC and propose a method for solving it based on the characteristics of vector quantization.\", \"weaknesses\": \"1. The motivations and advantages of employing a uniform sphere distribution are hard to understand. The uniform quantizer with additive uniform noise also approximates the encoder gradient with the difference in RD loss between adjacent quantization centers (which is the main advantage of the uniform sphere distribution), as shown in Eq. (4).\\n\\n By the way, I noticed that the proposed method uses a learnable multidimensional codebook instead of a fixed codebook of uniform quantizers. However, such a gap can be reduced by the nonlinear transforms (for flexible boundaries and codebook in the source space) and conditional coding (for redundant multidimensional signals).\\n\\n2. The importance of the proposed method seems to be limited. Vector quantization and conditional coding (*e.g.*, spatial auto-regression [R1] and channel-wise auto-regression [R2]) are two kinds of methods that solve the high-dimensional coding problem of latent representations, and the latter one is more prevalent in existing methods. Theoretically, the proposed alternating method can be used in both vector quantization and conditional coding. However, the authors only offer the results for vector quantization. 
It is better to evaluate the contribution of the proposed method by integrating it with state-of-the-art conditional coding methods, such as ELIC [R3] and TCM [R4].\\n\\n [R1] D. Minnen, J. Ball\\u00e9, and G. D. Toderici. Joint autoregressive and hierarchical priors for learned image compression, In *Advances in Neural Information Processing Systems (NeurIPS) 31*, 2018, pp. 10771-10780.\\n\\n [R2] D. Minnen and S. Singh. Channel-wise autoregressive entropy models for learned image compression. In *2020 IEEE International Conference on Image Processing (ICIP)*, 2020, pp. 3339-3343.\\n\\n [R3] D. He, *et al.* ELIC: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2022, pp. 5718-5727.\\n\\n [R4] J. Liu, H. Sun, and J. Katto. Learned image compression with mixed transformer-cnn architectures. *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR)*, 2023, pp. 14388-14397.\\n\\n3. Contributions on interpreting neural compression as vector quantization should be clarified. There has been work (Ball\\u00e9 *et al.*, 2020) that reveals the relationship between the source domain and the latent representation. Although this paper is cited by the authors in their related work, the relationship and contributions of the two papers are not clarified.\\n\\n4. Several details should be clarified in the manuscript to ensure that the paper is self-contained.\\n\\n - The implementation of vector quantization in the latent space, which is crucial to better understand the contribution of the proposed method.\\n\\n - The definition on the uniform sphere distribution.\\n\\n I note that there are two different definitions of hypersphere, with a difference in whether the points with a distance less than the radius are considered part of the hypersphere. 
It is suggested that the authors provide a clear definition.\\n\\n (Additional) 2 definitions, with the latter one being the case in this paper:\\n\\n a) The $(k-1)$-sphere with a radius $R$ is the set of points $[x_1, x_2, \\\\cdots, x_k]$ with $\\\\sum_{i=1}^kx_i^2 = R^2$.\\n\\n b) The $k$-dimensional hypersphere with a radius $R$ is the set of points $[x_1, x_2, \\\\cdots, x_k]$ with $\\\\sum_{i=1}^kx_i^2\\\\leqslant R^2$.\\n\\n5. Typos:\\n\\n - There are several omitted citations in the second paragraph of Section 2.\\n\\n - There is a redundant comma after \\u201ce.g.,\\u201d in Line 99.\\n\\n - The references are not cited with proper commands. Some of the citations need to be replaced by `\\\\citep` instead of `\\\\citet`.\\n\\n - There is an unnecessary bracket after $\\\\mathbf{\\\\mathit{y}}$ in Line 353.\", \"questions\": \"1. What is the main advantage of using noise that follows a uniform spherical distribution over conventional additive uniform noise?\\n2. In the figures, the encoder transform of UQ-STE (Figure 3) includes the quantizer while that of UQ-AUN (Figure 2) does not. Why?\\n3. What's the definition of hypersphere in this paper?\\n4. Why is the prior optimized together with the decoder instead of the encoder in the alternating optimization? The distribution of the codeword is determined only by the boundaries, which are determined by the encoder.\\n5. How to guarantee $\\\\Vert\\\\mathbf{\\\\mathit{y}}-\\\\hat{\\\\mathbf{\\\\mathit{y}}}\\\\Vert= \\\\Vert\\\\mathbf{\\\\mathit{y}}-\\\\mathbf{\\\\mathit{e}}_i\\\\Vert=\\\\Vert\\\\mathbf{\\\\mathit{y}}-\\\\mathbf{\\\\mathit{e}}_j\\\\Vert$ for the vector quantizer?\\n6.
Is the proposed alternating optimization method applicable to other NTC models, including those with uniform quantizers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We gratefully thank the reviewer for their comments and suggestions.\\n\\nAlthough the current version of the paper has many weaknesses, we would like to emphasize that:\\n(1) the alternating optimization strategy is important to reduce train-test mismatch in decoder and codebook optimization.\\n(2) the sphere-noise based approximation is important to provide a smoother and more optimal gradient for encoder optimization.\\n\\nIn the future, we plan to revise the paper as follows:\\n(1) We will extend our method to multi-layer quantization architectures (e.g., hyperprior model, ELIC, NVTC, etc.). The quantization boundaries and centers in these models are more complex, which will require additional effort in method design.\\n(2) We will reorganize the structure of the paper and provide more detailed gradient analysis for other VQ-based approximation methods.\\n(3) We will conduct an ablation study on applying the alternating optimization strategy to existing NTC methods with scalar quantization and compare it with mixed quantization strategies and decoder fine-tuning strategies.\\n(4) We will include more implementation details, such as network structures and descriptions of the entropy model.\\n(5) We will revise any typographical errors.\\n\\nThank you once again for taking the time to review\"}", "{\"summary\": \"In this paper, the authors propose an optimization strategy for vector quantization in neural compression.
Since quantization is non-differentiable, they approximate the vector quantization error using noise sampled from a uniform spherical noise distribution. Additionally, they introduce an optimization strategy to effectively minimize the rate-distortion loss function in neural compression. The authors tested their method on simulated data sources and several real-world images, demonstrating that their approach provides better compression efficiency compared to existing vector quantization methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. An alternating optimization procedure to optimize the encoder network, the codebook of vector quantization, and the decoder network. This procedure could result in better convergence of the RD loss function.\\n2. An approximation of vector quantization using uniform spherical noise centered on the latent vector.\\n3. A gradient analysis of the encoder latent with respect to the loss function.\\n4. Deriving the correspondence between vector quantization in the latent space and the corresponding quantization in the image space.\", \"weaknesses\": \"1. The paper is not well-written and is incomplete in several sections. In the related work section, citations are missing, and sentences are incomplete, making it difficult to relate the written content to the prior art. A few of the papers in the references are repeated.\\n\\n2. The evaluation of the proposed vector quantization is limited. The authors have only experimented with a low-complexity autoencoder using a single layer. Consequently, the impact of the proposed method on neural compression is limited. The authors should utilize recent state-of-the-art variational autoencoder-based neural image compression methods, such as [1] and [2], and apply the proposed vector quantization to the latent space of these advanced methods.
When the encoder and decoder are more powerful, the impact of vector quantization on reducing the bitrate might be lower than what is shown in the paper.\\n [1] Cheng et al., Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules, CVPR 2020\\n [2] He et al., ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding, CVPR 2022.\\n\\n3. The details of the network architecture are missing from the paper.\\n\\n4. The alternating optimization strategy is well-established in the vector quantization literature, where the codebook is fixed while optimizing the encoder and decoder. Additionally, in neural compression, some prior works [3] perform fine-tuning of the decoder using the quantized latent $\\\\hat{y}$, showing that optimizing the decoder with the quantized latent improves compression efficiency and reduces the train-test set mismatch. The citations are missing.\\n [3] Zongyu Guo et al., Soft then Hard: Rethinking the Quantization in Neural Image Compression, ICML 2021\\n\\n5. The citations to the related work (baseline) are incorrect (e.g., in Table 1), making it difficult to review the paper.\", \"questions\": \"1. What is the single-layer factorized model? Is it the encoder with a single layer, or is it the factorized entropy model with a single layer? The description of the network architecture is not clear in the paper.\\n\\n2. Please provide more details on the optimization of quantization boundaries. When the codebook is fixed, the decoder network and the entropy model are fixed, and the quantization boundaries depend on the codebook centers. How are the boundaries defined? Is it with respect to nearest-neighbor-based partitioning? When the encoder is optimized, the encoder might move the latent into a different partition. Is this what is meant by the optimization of quantization boundaries?\\n\\n3.
The rate of the baseline methods is controlled by adjusting the codebook sizes. Why is the entropy model not used for the baseline methods in the comparison? Even though the baseline methods do not consist of the entropy model, it is better to include the entropy model. The BD-rate gain for the proposed method could also come from the use of the entropy model, in addition to the proposed vector quantization method. The baseline method with the entropy model might also have similar results to the proposed method. If the baseline method also includes the entropy model, it will be easier to quantify the improvement of the proposed vector quantization.\\n\\n4. In Table 1, for the baseline UQ-AUN (Factorized model Balle et al. (2018b)), is the hyper-prior entropy model used, or is the citation incorrect? In the text, it is written as the factorized entropy model, but it is cited with the hyper-prior entropy model: Johannes Balle, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018b.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses two main issues of vector quantization (VQ) approximation methods in neural compression. The paper proposes encoder-decoder alternating optimization strategy to address the train-test mismatch and stochastic sphere-noise based approximation technique for suboptimal encoder gradients for rate-distortion (R-D) optimization. Experimental results on synthetic sources and natural images demonstrate the effectiveness of the proposed method over previous VQ approximation methods in terms of R-D performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written and easy to follow.\\n\\n2. 
The proposed stochastic vector quantization approach for encoder optimization is superior to the previous VQ+STE approximation method as well as the UQ+AUN method, as demonstrated in experiments.\", \"weaknesses\": \"1. The proposed encoder-decoder alternating optimization strategy is of less importance. Recent neural compression methods address the train-test mismatch issue in end-to-end training by adopting mixed quantization. That is, additive uniform noise is used for learning the entropy model, but the quantized latent is employed when it is passed to the decoder. There is no evidence that the encoder-decoder alternating optimization strategy is better than the mixed quantization method. Moreover, as the authors illustrated, the proposed alternating optimization strategy is only applicable to single-layer quantization and unconditional entropy models, which leads to obviously degraded R-D performance.\\n\\n2. In the proposed stochastic vector quantization approach, the authors assume $q(\\\\tilde{y}|y)$ is a uniform sphere distribution centered at $y$. However, there is no theoretical evidence to support that this assumption is reasonable.\\n\\n3. In experiments:\\n\\n(1) For low-dimensional vector sources, it is not reasonable for the dimension of the latent-space vector to be the same as that of the source-space vector, as the primary task of the encoder is dimensionality reduction for feature extraction.\\n\\n(2) The specific structure of the entropy model of VQ-STE and the proposed method is not given. Due to the different entropy models, it is also unfair to compare the proposed method with UQ-AUN and UQ-STE.\\n\\n(3) The R-D performance of the proposed method is evidently worse than current state-of-the-art methods. It is even worse than BPG444.\", \"questions\": \"1. What are the advantages of the proposed encoder-decoder alternating optimization strategy over the mixed quantization method?\\n\\n2.
Could the authors theoretically prove that the assumption of $q(\\\\tilde{y}|y)$ being a uniform sphere distribution centered at $y$ is valid?\\n\\n3. Could the performance of the proposed model achieve state-of-the-art results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
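As an editorial aside on the hypersphere discussion in the reviews above (the $(k-1)$-sphere surface versus the solid $k$-dimensional ball, with the latter being the case used for the sphere-noise approximation): the following is a minimal, hypothetical NumPy sketch of how uniform noise under each definition can be sampled. The function names are illustrative only and do not come from the paper under review.

```python
import numpy as np

def sample_sphere_surface(k, radius=1.0, size=1, rng=None):
    """Uniform samples on the (k-1)-sphere {x : ||x|| = R} (definition a).

    Normalizing isotropic Gaussian draws yields uniformly distributed
    directions on the surface of the sphere.
    """
    rng = np.random.default_rng(rng)
    g = rng.standard_normal((size, k))
    return radius * g / np.linalg.norm(g, axis=1, keepdims=True)

def sample_ball(k, radius=1.0, size=1, rng=None):
    """Uniform samples in the solid ball {x : ||x|| <= R} (definition b).

    A uniform surface direction is scaled by U**(1/k) so that the volume
    of the ball is covered uniformly rather than concentrating at the center.
    """
    rng = np.random.default_rng(rng)
    directions = sample_sphere_surface(k, 1.0, size, rng)
    radii = radius * rng.uniform(size=(size, 1)) ** (1.0 / k)
    return directions * radii
```

Definition (a) corresponds to `sample_sphere_surface`; definition (b) corresponds to `sample_ball`. The `U**(1/k)` radial scaling is the standard inverse-CDF argument for uniform volume sampling in $k$ dimensions.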
4X9RpKH4Ls
Can Transformers Do Enumerative Geometry?
[ "Baran Hashemi", "Roderic Guigo Corominas", "Alessandro Giacchetto" ]
We introduce a Transformer-based approach to computational enumerative geometry, specifically targeting the computation of $\psi$-class intersection numbers on the moduli space of curves. Traditional methods for calculating these numbers suffer from factorial computational complexity, making them impractical to use. By reformulating the problem as a continuous optimization task, we compute intersection numbers across a wide value range from $10^{-45}$ to $10^{45}$. To capture the recursive nature inherent in these intersection numbers, we propose the Dynamic Range Activator (DRA), a new activation function that enhances the Transformer's ability to model recursive patterns and handle severe heteroscedasticity. Given precision requirements for computing the intersections, we quantify the uncertainty of the predictions using Conformal Prediction with a dynamic sliding window adaptive to the partitions of equivalent number of marked points. To the best of our knowledge, there has been no prior work on modeling recursive functions with such high-variance and factorial growth. Beyond simply computing intersection numbers, we explore the enumerative "world-model" of Transformers. Our interpretability analysis reveals that the network is implicitly modeling the Virasoro constraints in a purely data-driven manner. Moreover, through abductive hypothesis testing, probing, and causal inference, we uncover evidence of an emergent internal representation of the large-genus asymptotic of $\psi$-class intersection numbers. These findings suggest that the network internalizes the parameters of the asymptotic closed-form and the polynomiality phenomenon of $\psi$-class intersection numbers in a non-linear manner.
[ "AI for Mathematics", "Algebraic Geometry", "Theorem Discovery", "Transformers", "Recursive functions", "Interpretability Analysis and world model." ]
Accept (Poster)
https://openreview.net/pdf?id=4X9RpKH4Ls
https://openreview.net/forum?id=4X9RpKH4Ls
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zRDj6slahN", "t66yWswnRZ", "s89aUI1Z1k", "s778yaVXmF", "otpaJRGeGN", "mnXE8hS5UI", "gxjOObl1Jv", "exo9C2nJkK", "e3WCRwN4xj", "d0scgf5x3S", "YyIhFws2ro", "Yv9216J3g8", "XCK5AO76P0", "O2SRgd1Aps", "MIgYxiRXlN", "KJBmbuXu12", "GJADUSPOR7", "FlW39RvIqX", "BOuvH7ngD1", "4PbJSndLRZ", "0KEr22cjpP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732114886893, 1732099847164, 1732710289549, 1732543049278, 1730612268614, 1730390446969, 1732516001949, 1730667263622, 1732100475204, 1732548480680, 1732118165034, 1732515567273, 1732287257240, 1732061978027, 1730192419422, 1734642577089, 1737524086459, 1732118192743, 1732111431300, 1733225065378, 1732448578084 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_daJv" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_daJv" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_Yn4i" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_ncfx" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_ncfx" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_qDDj" ], [ 
"ICLR.cc/2025/Conference/Submission10874/Area_Chair_gyDC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Authors" ], [ "ICLR.cc/2025/Conference/Submission10874/Reviewer_Yn4i" ] ], "structured_content_str": [ "{\"title\": \"Response part 2 to Reviewer Yn4i\", \"comment\": \"**C3**: or from a machine learning point of view (it is not clear what is more interesting about these numbers than about any sequence in the OEIS, say). The experimental results are not particularly surprising given what is known about transformers, or at least I don't see it.\\n\\n**A3** Thank you for your insightful question. In the following we summarise our contribution to ML and AI for Science community and explain why modeling $\\\\psi$-class intersection numbers presents unique and important challenges beyond those associated with sequences found in the OEIS. We have discussed these points in Related Works, Methods sections and Appendix C. As elaborated for Reviewer daJv, we believe our work covers a wide range of topics:\\n\\n1. **Modeling Complex Recursive Functions:** We introduce enhancements to Transformer architectures to address the challenge of modeling recursive functions with high-variance factorial growth\\u2014a significant advancement beyond modeling periodic functions or simple sequences. The behavior of $\\\\psi$-class intersection numbers in enumerative geometry is characterized by recursive relationships with factorial blow-up, leading to sparse, dramatic, and high-variance fluctuations. This recursive nature introduces substantial complexity in modeling and accurately approximating these functions. 
**To the best of our knowledge, there has been no prior work attempting to model functions with such recursive behavior and factorial growth.** This intrinsic property of enumerative geometric problems makes the computation and modeling of such entities extremely challenging.\\n\\n2. **Advancing Explainability Methods:** We provide explainability techniques in a non-trivial domain beyond standard language tasks, offering insights into how the model processes complex mathematical structures. Our interpretability analyses contribute to the (re)discovery of deep mathematical concepts that have taken over 30 years of research to develop, thereby bridging the gap between AI and advanced mathematics.\\n\\n3. **Emphasizing Uncertainty Quantification:** We advocate for the importance of uncertainty quantification in the AI for Mathematics community, highlighting its necessity for reliable and interpretable models. In domains such as mathematical theorem proving, understanding the confidence and uncertainty of model predictions is very important.\\n\\n**C4**: It would be good if the authors could at least mention questions that would bring something interesting to the enumerative geometry (something feels interesting when they perform an analysis of the internal representation of the network, but it stops just before it gets interesting...).\\n\\n**A4** Thank you for your insightful question. In response, we have expanded the discussion on future research directions in the conclusion of the manuscript to highlight questions that may bring new insights to enumerative geometry.
Briefly, our analysis of the internal representations of the network suggests several intriguing avenues for further exploration:\\n\\n- **Asymptotic behavior in high genus and number of marked points:** An interesting open question is the behavior of $\\\\psi$-class intersection numbers in the regime where both the genus $g$ and the number of marked points $n$ tend to infinity while maintaining a bounded ratio $g/n$. Currently, there is no conjecture about the asymptotic behavior in this limit or how it depends on the partitions. Our interpretability methods might provide valuable hints or patterns that could lead to new conjectures or theoretical advancements in this area. We are currently working on this problem.\\n\\n- **New identities:** Recent developments, 2212.04256, reveal that certain partitions have vanishing coefficients in their decomposition into elementary symmetric polynomials, suggesting a deeper hidden structure. We believe that investigating the internal understanding of our network (e.g., Section 5.1) could provide further evidence in examining these new results and uncovering the underlying patterns.\\n\\nIf our comments resolve your concerns, we would appreciate it if you would consider raising your score!\"}", "{\"title\": \"Response part 1 to Reviewer daJv\", \"comment\": \"We thank the reviewer for their valuable and constructive feedback. We have clarified several elements of the paper in our updated version. Where possible, we have highlighted changes in the revised document.\\n\\nA1. **W1:** We agree and this is an important point, thank you! We have added a discussion of this to the Methods section and Appendix C of the manuscript. We also briefly discuss them here. Our primary motivation for employing Transformers stems from their ability to respect the symmetries and inductive biases present in each modality of our inputs. Specifically, the input tensor $B$ possesses a sparse graph or coordinate (COO) sequence structure.
Transformers can effectively handle such structures due to their masked attention mechanisms and relative positional embeddings. Also, the input $d$ is permutation invariant. Transformers naturally accommodate this property. Additionally, Transformers are state-of-the-art models known for their flexibility and effectiveness in handling multi-modal data. Their architecture allows for the integration of different data types and modalities, which is important for our problem. Another advantage of using Transformers is the flexibility they offer in interpretability analyses. We chose Transformers not merely because they are popular network architectures, but because their inherent properties make them aligned with the structural characteristics of our data and problem.\\n\\nA2. **W2:** Thank you for highlighting this point. We acknowledge that the Related Work section was brief. We have updated the manuscript with more connective discussion and references. Our work spans various tasks and technologies within AI and pure Mathematics, making it really difficult to comprehensively cover all related works within the page limits of the venue. Therefore, we have focused on discussing the most seminal works and key contributions in the field of AI for Mathematics.\\n\\nA3. **W3:** Thank you for your insightful suggestion. We agree that the DRA is not the only method to capture periodic behavior , and we appreciate the opportunity to clarify this point. As a result of your comment, we have added a discussion at the beginning of the Methods section to address this matter. It's important to note that the behavior of $\\\\psi$-class intersections in enumerative geometry is not purely periodic but rather recursive with factorial growth. This recursive nature introduces significant complexity in modeling and accurately approximating these functions. Specifically, the functions exhibit sparse, dramatic and high-variance growth and drops due to their recursive properties. 
While various methods have been employed to capture periodic patterns effectively\\u2014particularly in time series prediction tasks\\u2014they typically deal with unimodal data exhibiting relatively low variance in both in-distribution (ID) and out-of-distribution (OOD) regions. These methods would not generalize well to datasets with the high variance and recursive behavior observed in our context. **To the best of our knowledge, there has been no prior work attempting to capture such recursive behavior with a factorial blow-up characteristic.** This is an intrinsic property of enumerative geometric problems in mathematics, making the computation of such entities rare and extremely challenging.\\n\\nA4. **W4:** Fig. 1 indeed shows a comparison between MLPs with various non-linear activation functions and also the vanilla KAN model. We did this in order to demonstrate the abilities of DRA on a recursive toy dataset. We have tried to clarify this further in the updated manuscript.\"}", "{\"title\": \"Follow-Up on Review Feedback\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate the time and effort you have devoted to providing constructive feedback on our submission. Your insights have been incredibly valuable in helping us refine our work.\\n\\nIn our official comment above, we aimed to clarify our contributions. Beyond our impact on the Explainable AI, Experimental Mathematics, and AI for Mathematics communities, **we emphasize that prior to this paper, there had been no attempts to capture recursive functions with factorial growth in any context. Our work with the DRA activation function represents the first successful effort in this direction**.\\n\\nAs the manuscript updating phase is nearing its conclusion on November 27th, we wanted to kindly follow up to see if there are any additional questions, concerns, or points that we could clarify or address to further assist with your review process.
We are more than happy to provide any additional information or details you might need.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your response. I appreciate the additional information and statements provided by the authors. I think I agree with Reviewer qDDj: we need to have at least one expert in enumerative geometry, and it seems that the topic of this paper does not fit the ICLR venue.\"}", "{\"summary\": \"This paper introduced DynamicFormer to learn and predict the $\\\\psi$-class intersection numbers. Experiments include both in-distribution results and out-of-distribution results. The authors also presented some experiments to illustrate how transformers perform enumerative geometry. Meanwhile, the authors also investigated whether the proposed method could perform abductive reasoning and hypothesis testing to estimate the parameters of the asymptotic form for intersection numbers.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The idea of using transformers to do enumerative geometry is new.\", \"Meanwhile, the authors proposed a new activation function, DRA, which was found to be useful to improve the prediction performance.\", \"The authors compared DRA with other popular activation functions in Figure 1 and Table 2.\", \"Experiments show some evidence that transformers can learn to predict the $\\\\psi$-class intersection numbers.\", \"Meanwhile, the authors also presented a discussion on how transformers are able to achieve that by inspecting the internal vector space of the model.\", \"The authors also investigated how inputs affect the model\\u2019s understanding of $\\\\psi$-class intersection numbers and the parameters for large genus.\"], \"weaknesses\": [\"I am not an expert in \\\"enumerative geometry\\\". However, I think the paper lacks many important clarifications and discussions.\", \"The paper lacked discussion of the reasons/motivations of using transformers.
At the moment, the paper seemed to be only a combination of a popular neural network architecture and a new mathematical problem.\", \"The \\\"Related Work\\\" section is quite weak at the moment: the authors spent only one paragraph to discuss related works and then summarized their contributions.\", \"From my perspective, the proposed DRA is not the only way to capture the periodic behavior in data. This lacks sufficient discussion in the paper.\", \"In the experiments in Figure 1, the authors did not apply DRA to other neural network architectures (e.g., MLP), nor provided readers with more discussion on that.\", \"Lack of theoretical discussion on the proposed method.\", \"Code is not available.\"], \"questions\": [\"I wonder why the authors choose transformers as the regression function?\", \"In Figure 1, have you tried to apply DRA to MLP or other potential neural networks?\", \"Will the code and the datasets be available?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors investigated the ability of transformers to compute the psi-intersection numbers in geometry and found that they perform (unsurprisingly) very well in distribution and quite well outside of distribution, and performed a series of analyses to understand which structures are being learned this way. They find in particular that the model learns the Dilaton equation and some information about the exponential growth of these psi-intersection numbers.\\n\\nOverall, this paper is written very carefully, with excellent explanations of what is being done, although the importance of some details is unclear (like: what do we need to know about psi-intersection numbers? most of the sophisticated formulae are not really used in a meaningful way).
However, I am not sure the work is very interesting from a geometric point of view (the interesting thing is to gain theoretical insight into what psi-intersection numbers are, not to get somehow numerically accurate estimates of them) or from a machine learning point of view (it is not clear what is more interesting about these numbers than about any sequence in the OEIS, say). The experimental results are not particularly surprising given what is known about transformers, or at least I don't see it.\\n\\nWhile this work can be viewed as a first step towards making progress in applying machine learning to enumerative geometry, and the carefulness of the writing and experiments should be commended, I don't think it brings a lot of interesting new information about machine learning or enumerative geometry.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The care and clarity of the writing, the fact that some extensive research has been done, the general trust in the results that this paper inspires.\", \"weaknesses\": \"What do we learn about machine learning or enumerative geometry? We seem to learn something that could be expected, a particular case of a general phenomenon.\", \"questions\": \"It would be good if the authors could at least mention questions that would bring something interesting to the enumerative geometry (something feels interesting when they perform an analysis of the internal representation of the network, but it stops just before it gets interesting...).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ncfx\", \"comment\": \"Thank you for your thorough and constructive feedback. We are pleased that our revisions have addressed the major concerns and appreciate your positive assessment of our work.
Regarding the remaining minor concerns, we have just corrected them and incorporated your suggestions into the revised manuscript.\\nYour feedback has been invaluable in enhancing the quality and clarity of our work. We eagerly look forward to contributing more to the ICLR community.\"}", "{\"summary\": \"The paper proposes and tests the usage of transformers in the field of enumerative geometry, specifically regarding topological recursions and $\\\\psi$-class intersection numbers. To accomplish this, the paper proposes a new class of activation functions called Dynamic Range Activators (DRAs), and presents evidence of their performance in predicting a simple recursive function as part of a fully connected neural network, and then their ability to predict $\\\\psi$-class intersection numbers as part of their DynamicFormer architecture. The paper then attempts to investigate the trained DynamicFormer to see if it can predict other concepts in enumerative geometry, including the Dilaton equation that stems from Virasoro constraints, as well as the asymptotic behavior of $\\\\psi$-class intersection numbers using abductive reasoning, verified using counterfactual intervention.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The new DRA functions, motivated by the evidence presented in the paper, are a significant contribution that may interest machine learning scientists.\", \"Training a DynamicFormer to predict $\\\\psi$-class intersection numbers, which then allows one to investigate a system's deeper geometry, is a significant, novel contribution that will interest mathematicians investigating enumerative geometry.\", \"The use of Conformal Prediction to estimate uncertainty provides a concrete measure of confidence in the experimental results, contributing to the paper's soundness.\", \"The figures are clear and high-quality, with informative captions.\", \"The writing is clear and mostly organized, including the 
mathematical background, methodology, and results.\"], \"weaknesses\": [\"**Notice: These weaknesses have been addressed during the discussion phase, and apply only to the initial version of the manuscript. However, these will remain unedited for posterity.**\", \"### Section 2\", \"The equations in this section use $\\\\hbar$ without defining it in the text. It may be worth explicitly calling it the reduced Planck constant in the text.\", \"The last paragraph mentions excluding the tensor $C$ due to a decreased impact on the computed $\\\\psi$-class intersection numbers, observed during experimentation. Appendix C justifies this exclusion, yet is not referenced in the text, making for seemingly unsound reasoning for excluding $C$. The authors should consider referencing appendix C here to further justify the exclusion of $C$, and expanding on this point within appendix C proper.\", \"### Section 3\", \"In the first two paragraphs, the paper presents the DynamicFormer for the first time and references a figure placed within an unrelated appendix, resulting in a disjointed reading experience. The authors may consider moving some parts of section 3 (such as its first two paragraphs) into a new appendix showcasing the DynamicFormer in detail and including the figure close by.\", \"In the same paragraphs, the authors use the initials COO without previously defining them. These initials seem to appear nowhere else in the main text, and only in appendix B are they defined as Coordinate List. Besides hurting the paper's readability, this seems to be an implementation detail that does not need to appear in the main text.\", \"The last paragraph mentions the [DYN] registry tokens, but fails to reference appendix B1. It may be appropriate to reference it here.\", \"### Section 5\", \"Equation 5.4 is presented without proof, with the authors claiming they used an approach described in Eynard et al. (2023). 
A sketch of the proof (perhaps in an appendix) will contribute to the work's soundness.\", \"Figure 3, and the relevant experiment, are based on the assumption that $A$ is rational. The authors should consider justifying the choice of testing only rational values of $A$, perhaps by connecting it back to equation 5.3, as proven by Aggarwal (2021).\", \"Figure 3 presents a significantly higher value of $R^2$ for $A=2/3$ compared to the values for $A=4/6$ and $A=6/9$, despite being identical numbers. This issue does not appear for other such sets of identical rational numbers, such as $A=3/4$ and $A=6/8$. Since the rest of the subsection on Abductive Reasoning relies on $A=2/3$ being the correct answer, **this error calls the entire subsection into question and significantly hurts the paper's soundness and overall rating**. The authors must justify how the $R^2$ of $A=2/3$ is different from the other two values, or replace the figure (and perhaps rewrite some of the supporting text). Based on the other values of the figure, it should be expected to see a maximal $R^2$ around $A=2/3$, but without such a significant jump.\", \"### Typos\", \"Section 5.1 line 319: \\\"The topological recursion formula equation 2.4 [...]\\\". Consider removing either \\\"formula\\\" or \\\"equation\\\", or placing all of \\\"equation 2.4\\\" in parentheses.\", \"Section 5.1.1. has multiple citations included in sentences with their parentheses. The ICLR 2025 formatting instructions (section 4.1) require such references to not have parentheses except around the year. The references in question appear in lines 371, 378, and 381.\", \"Section 5.1.1 line 417: \\\"As a result, We find an evidence [...]\\\". 
\\\"We\\\" does not need to be capitalized, and \\\"an\\\" should be removed.\", \"Appendix C line 950: \\\"Figure 6 shows (s) numerical [...]\\\".\", \"The title of appendix D and the caption of figure 7 both mistakenly write Princip**le** Component Analysis instead of Princip**al** Component Analysis.\"], \"questions\": [\"**Notice: These questions have been answered during the discussion phase, and remain unedited for posterity.**\", \"What is the significance of $\\\\hbar$ in the quantum Airy structure? How is it relevant specifically to training the DynamicFormer?\", \"Figure 3 may be a discrete sampling of an underlying (continuous?) map that gives an $R^2$ for each $A$, with a maximum at $A=2/3$. Can the authors characterize this map?\", \"Figure 4 shows a significantly weaker causal impact of $B$ on the number of intersection points, compared to $n$ and $d$. Though the authors call this unexpected in section 5's last paragraph, is there any explanation regarding the weak causal impact of $B$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response part 2 to Reviewer daJv\", \"comment\": \"A5. **W5:** We acknowledge that a deeper theoretical analysis of DRA would enhance the understanding and impact of our work. 
However, our research covers a wide range of topics:\\n\\n- **Capturing Recursive Functions:** We introduce enhancements to Transformer architectures that address the challenge of modeling recursive functions with high-variance factorial growth, which is a significant step beyond periodic functions and simple toy problems in AI for mathematics.\\n\\n- **Explainability Methods:** We provide explainability techniques in non-trivial domains beyond standard language tasks, offering insights into how the model processes complex mathematical structures, which contributes to the (re)discovery of profound mathematical concepts that have taken over 30 years of research to develop, bridging the gap between AI and advanced mathematics.\\n\\n- **Advocating Uncertainty Quantification:** We also emphasize the importance of uncertainty quantification in the AI for Mathematics community, highlighting its necessity for reliable and interpretable models.\\n\\nGiven the breadth and depth of these contributions, we focused the manuscript on presenting our findings and their implications within these areas. A comprehensive functional-analytic discussion of DRA is indeed valuable but would require extensive elaboration that could detract from the main focus of our current work. We view the theoretical exploration of DRA as a promising avenue for future research.\\n\\nA6. **W6, Q3:** The code and data are accessible publicly and anonymously via https://anonymous.4open.science/r/DynamicFormer-977D/.\\n\\nA7. **Q1:** This is a great question. We have tried to address it in A1.Q1.\\n\\nA8. **Q2:** We have discussed this question in A4.W4.\\n\\nIf our comments resolve your concerns, we would appreciate it if you would consider raising your score!\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for your thoughtful feedback and for raising concerns regarding the suitability of our work for ICLR. 
Given that the primary area of this paper is applications to physical sciences (physics, chemistry, biology, etc.), we respectfully disagree and believe that our study offers several key contributions to both pure ML and AI for Science communities, extending beyond typical toy mathematical datasets such as arithmetic problems:\\n\\n1. **Development of DynamicFormer:** We introduce DynamicFormer, a novel multi-modal Transformer-based model enhanced with the Dynamic Range Activator (DRA). This architecture is specifically designed to handle the complex, high-variance recursive data with heteroscedasticity inherent in enumerative geometry. By augmenting Transformer architectures, we demonstrate their capability to accurately model recursive functions exhibiting factorial growth. This addresses a long-standing challenge of modeling recursive functions, representing a significant advancement beyond existing models, which often struggle with such intricate structures. **To the best of our knowledge, we are the first to explore and successfully model recursive functions with factorial growth.**\\n\\n2. **Explainability and Uncertainty Quantification:** We have implemented robust conformal uncertainty estimation and interpretability analyses in our model, ensuring that predictions are not only accurate but also reliable and interpretable. This is particularly crucial in AI for Mathematics contexts, where precision and rigor are paramount. To the best of our knowledge, **no prior work in AI for Mathematics has attempted to integrate uncertainty quantification and explainability methods specifically tailored for research-level mathematical problems.** \\n\\n3. **Discovery of Mathematical Insights:** Our analysis revealed that the network autonomously learned very deep identities and constraints in a purely data-driven manner, providing insights into the model\u2019s mechanisms for out-of-distribution (OOD) generalization. 
Additionally, by investigating the asymptotic behavior of the intersection numbers, our interpretability analyses offered valuable insights into the underlying mathematical structures and network's \\\"World Model\\\". This is especially beneficial for theorem building, where the network's guidance or hints can suggest the existence of relations or identities. Furthermore, our approach facilitates the conjecturing of properties related to the asymptotic formulas of other intersection numbers. Collectively, these findings pave the way for data-driven human-machine collaboration in mathematical discovery.\\n\\nAlthough our study investigates concepts from enumerative geometry, we have made concerted efforts to present the material in a manner accessible to the ML community. We provide the necessary background and context to ensure that the paper is understandable without requiring specialized knowledge in enumerative geometry, as evidenced by Reviewer ncfx's positive assessment. **Our objective is indeed to bridge the gap between these fields, demonstrating how advanced ML techniques can tackle complex mathematical problems and, conversely, how these problems can inspire new developments in AI.** We also have to note the emerging body of work at major ML conferences that intersects with pure mathematics. For instance, the recent paper **\\\"Machine Learning Detects Terminal Singularities\\\" presented at NeurIPS 2023** applies ML methods to problems in algebraic geometry. This exemplifies an interest and recognition within the machine learning community for such interdisciplinary research, supporting the suitability of our work for ICLR.\\n\\nWe hope that these clarifications and enhancements address your concerns and demonstrate the meaningful impact and innovative nature of our research. Thank you again for your comments, which have been invaluable in improving the quality and clarity of our paper. 
If our arguments resolve your concerns, we would appreciate it if you would consider raising your score!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Response part 1 to Reviewer qDDj\", \"comment\": \"Thank you for your detailed and constructive feedback. In the updated manuscript, we have improved our writing and addressed all your concerns. All changes we discuss below are in the new revision of the manuscript. Where possible, we have highlighted changes in the revised document.\\n\\n**C1** I recommend to reject the paper mainly because I believe ICLR is not a suitable venue, both for refereeing this paper (this paper needs to be reviewed by at least one expert in enumerative geometry, I don't know if there are such reviewers at ICLR) and for disseminating it (a journal in the field of computational enumerative geometry may be more suited). Furthermore, in my opinion the presentation of the material can be improved in several aspects before publication.\\n\\n**A1** Thank you for your thoughtful feedback. We believe that our paper is indeed well-suited for ICLR, as it lies at the intersection of ML and Mathematics\u2014an area of growing interest within the AI for Science community (discussed in the Related Work section). Our primary focus is on developing novel ML methodologies, specifically enhancing Transformer architectures to handle complex, high-variance recursive data inherent in enumerative geometry. **To the best of our knowledge, there has been no prior work attempting to model functions with such recursive behavior and factorial growth.** This work contributes to the broader field of ML by addressing challenges in modeling recursive functions with factorial growth, which has implications beyond the specific mathematical domain we explore. \\n\\nWhile our study involves concepts from enumerative geometry, we have strived to present the material in a way that is accessible to the ML community. 
We have provided the necessary background and context to make the paper understandable without requiring specialized knowledge in enumerative geometry. Our goal is to bridge the gap between these fields, demonstrating how advanced ML techniques can tackle complex mathematical problems and, in turn, how these problems can inspire new developments in AI. We believe that ICLR provides an excellent venue for such interdisciplinary work, encouraging collaborations and discussions that can drive innovations in AI.\\nWe also note that there is an emerging body of work published at major ML conferences. For example, the recent paper \\\"Machine learning detects terminal singularities\\\" presented at NeurIPS 2023 applies ML methods to problems in algebraic geometry. This indicates an interest and recognition within the machine learning community for such research.\\n\\nRegarding the presentation of the material, we appreciate your feedback and have made significant improvements to the manuscript. We have revised several sections to enhance clarity and readability, ensuring that our contributions are communicated effectively.\\n\\n**C2**: To which extent is incorporating the conformal prediction framework in your analysis necessary? I am afraid this adds an additional layer of complexity that further hinders the communication of your findings. Maybe this discussion should be deferred to the appendix, keeping only what is strictly necessary to understand the main conclusion of your experiments in the main paper.\\n\\n**A2** Thank you for pointing this out! Upon your suggestion, we have moved this part to Appendix E. In response, we would like to emphasize that one of the key messages of our work is that simply predicting or classifying mathematical computables using ML is not sufficient. Predictions without proper uncertainty estimation are not reliable, especially in mathematics, where precision and rigor are paramount. 
Therefore, we aimed to enhance the reliability and credibility of our findings by incorporating conformal uncertainty estimation into our analysis. Moreover, given the high heteroscedasticity and high variance present in our data, standard conformal prediction procedures were inadequate for our purposes. We had to adapt specific techniques to achieve acceptable coverage levels and ensure the robustness of our results. We believe that this aspect is crucial for knowledge discovery and contributes to the advancement of reliable AI for Science applications.\"}", "{\"comment\": \"We understand your concern. However, we believe that our study makes several key contributions at the intersection of machine learning and Mathematics that go beyond working on typical toy math datasets (e.g., arithmetic) and simple language tasks:\\n\\n1. **Development of DynamicFormer:** We introduced DynamicFormer, a multi-modal Transformer-based model and Dynamic Range Activator (DRA) specifically designed to handle the complex, high-variance recursive data with heteroscedasticity inherent in enumerative geometry. By enhancing Transformer architectures, we demonstrated the capability to accurately model recursive functions with factorial growth. **This addresses a long-standing challenge in both machine learning and mathematical modeling** and represents an important step beyond existing models, which struggle with such intricate structures.\\n\\n2. **Explainability and Uncertainty Quantification:** We adapted proper conformal uncertainty estimation and interpretability analyses, ensuring that our model's predictions are not only accurate but also reliable and interpretable. This is particularly important and challenging in AI for Mathematics contexts, where precision and rigor are paramount. To the best of our knowledge, there has been no prior work attempting to incorporate uncertainty quantification and explainability methods for research-level mathematics problems.\\n\\n3. 
**Discovery of Mathematical Insights:** Our analysis revealed that the network autonomously learned Virasoro constraints, shedding light on how the model is actually performing OOD generalization. Additionally, by investigating the asymptotic behavior of $\\\\psi$-class intersection numbers, our interpretability analyses provided valuable insights into the underlying mathematical structures. This is particularly beneficial in the context of theorem building, where guidance or hints from the network can suggest the existence of relations or identities. Our approach can facilitate the conjecturing of properties related to the asymptotic formulas of other intersection numbers. Collectively, these findings pave the way for data-driven human-machine collaboration in mathematical discovery.\\n\\n\\nWe hope that these enhancements address your concerns and demonstrate the meaningful impact and innovative nature of our research. Thank you again for your comments, which have been invaluable in improving the quality and clarity of our paper.\"}", "{\"comment\": [\"Having viewed the updated manuscript, and the authors' response, it is clear that the aforementioned weaknesses, and more, have been addressed. Now a few minor concerns remain:\", \"Another reviewer requested to view the authors' code for the paper. Per the authors' judgement, referencing this code within the manuscript itself (for instance via footnote) may increase the work's accessibility.\", \"The use of highlighting to easily distinguish major changes is useful for the review process, but there is a concern that these highlights will remain in the final draft. 
Such highlighting has no place in a finished work, and the authors should remember to remove the highlighting later.\", \"Some typos seem to have snuck into the updated text:\", \"Abstract, line 20: \\\"To [the] best of our knowledge...\\\".\", \"Section 5.1.1, line 407: \\\"It is expected geometrically that $A$ would [be] a period...\\\".\", \"Section 5.1.1, line 485: \\\"..., there is [an] evidence that...\\\".\", \"However, it is clear that the authors are serious in responding to feedback, and given the minimal severity of the remaining concerns, there is now little reason to reject this work. This paper is solid and interesting, and no doubt will be of interest to the greater ICLR community. The authors may be pleased to see an updated review with increased scores. Note the weaknesses and questions that remain unedited, for posterity.\", \"I wish the authors the best of luck in their future endeavors.\"]}", "{\"title\": \"Response to Reviewer ncfx\", \"comment\": \"Thank you for your detailed and constructive feedback. We have improved our writing and updated the manuscript to address all your concerns. All changes we discuss below are in the new revision of the manuscript. Where possible, we have highlighted changes in the revised document.\\n\\nA1. **Sec2, W1:** $\\\\hbar$ is a formal parameter that keeps track of the genus. From the physics point of view, it should rather be called the *string coupling constant*, often denoted as $g_{s}$. But $g_s^{2g-2+n}$ is rather ugly, so people often use $\\\\hbar$ as a substitution for a small bookkeeping parameter. We have clarified this in the main text.\\n\\nA2. **Sec2, W2:** Thank you for pointing this out. We now reference and discuss this observation and its theoretical basis in Appendix B. Specifically, the $C$-terms contribute quadratically, whereas the $B$-term contributes linearly and thus has a stronger effect on computing $\\\\psi$-class intersections.\\n\\nA3. **Sec3, W1, W2, W3:** Great observation. 
We moved the bulk of the model description from the beginning of Section 3, together with its tail, to Appendix C, next to the illustration, to improve the paper's readability and coherence. Now, Section 3 only discusses the motivations and methodology related to the DRA activation function.\\n\\nA4. **Sec5, W1:** Thank you for your insightful feedback. In response, we have updated the manuscript to include a brief description of the proof strategy for Equation (5.4). Our approach is based on a resurgent analysis of the $n$-point functions of psi-class intersection numbers, which are computed via determinantal formulas. These formulas emerge from the integrability properties of the intersection numbers, specifically the Korteweg\u2013de Vries (KdV) hierarchy involving the Airy function. We acknowledge that providing a detailed proof within the manuscript poses significant challenges due to the complexity and depth of the required mathematical framework. The progression from Kontsevich's proof of the Virasoro constraints to Aggarwal's proof of the asymptotic formula spanned nearly 30 years, underscoring the substantial mathematical developments involved. Currently, there are three independent proofs of Equation (5.4):\\n- **Aggarwal's Proof**: Spanning 76 pages.\\n- **Guo and Yang's Proof**: A 35-page proof.\\n- **Eynard's Proof**: Consisting of 26 pages.\\n\\nIncluding even a sketch of these proofs would necessitate introducing extensive mathematical background and sophisticated techniques, which could disrupt the coherence and focus of the paper. Therefore, we have opted to provide a concise overview of the proof strategy in the manuscript.\\n\\nA5. **Sec5, W2:** Thank you for bringing this to our attention. We have updated the manuscript to include a brief discussion on this point. In particular, this constant is expected to be a period on the associated spectral curve. 
For this specific enumerative problem, these periods are rational numbers associated with the WKB method applied to the Airy differential equation.\\n\\nA6. **Sec5, W3:** We sincerely apologize for the oversight and any confusion it may have caused. You are absolutely correct; this was an oversight on our part. The discrepancy in Figure 3 regarding the $R^2$ values of $A$ for equivalent fractions was due to not fixing the random seed during probe training and data loading. This led to inconsistent results for equivalent values of $A$. We have updated Figure 3 and the accompanying text to reflect these corrections. Your attention to detail has been invaluable in improving our paper. Thank you for your constructive feedback.\\n\\nA7. **Q1:** Addressed in A1. It is not relevant specifically in training the DynamicFormer. \\n\\nA8. **Q2:** In Fig. 3, we are trying to showcase the performance of the linear and non-linear probes in recovering the true value of the constant $A \\\\in \\\\mathbb{Q}$ within a range of possible values via a grid search. So, we train linear/non-linear probes to evaluate how well the Transformer's hidden representations encode our $A$. This approach not only allows us to see how such a fundamental \\\"conserved quantity\\\" is internalised by the network, but also, through abductive hypothesis testing, we could recover the actual value of $A$ from a conjectural version of the asymptotic formula. We have updated the manuscript with further clarification. \\n\\nA9. **Q3:** Great question. After careful reevaluation of our statement, we think that it is actually expected. The main reason is that the B-term contributes linearly, as shown in equation 2.4, while the dependences on $n$ and $d$ are factorial. So they play a much bigger role in the computation. This has been observed and proved in Aggarwal's paper as well. 
Thank you for pointing this out.\\n\\nIf our comments resolve your concerns, we would appreciate it if you would consider raising your score!\"}", "{\"summary\": \"Unfortunately, I have no expertise at all in computational enumerative geometry. My review will thus be quite superficial.\\n\\n*Summary*\\n\\nThis paper proposes to use transformer models to tackle what I understood is a central problem in enumerative geometry: computing the psi-class intersection numbers on the moduli space of curves. From my pretty crude, rudimentary and pragmatic ML perspective, the authors reduce this problem to learning a multi-modal function mapping input tuples of the form (quantum Airy structure datum [a tensor / sequence of tensors], genus [integer], number of marked points [integer], partitions [permutation-invariant set]) to output intersection numbers [sequence of integers (?)]. The model is trained on solutions computed using brute-force methods up to some genus, and evaluated on its ability to extrapolate to find solutions for higher genus (geni?).\\n\\nThe main technical contributions of the paper are methodological and consist in \\n\\n(i) designing a specific multi-modal transformer architecture suited to the problem at hand (combining mostly existing models / techniques)\\n(ii) introducing a novel activation function specifically suited to model recursive functions, which are crucial to solve the problem. \\n\\nExperiments on synthetic data are provided demonstrating that the model seems to be able to extrapolate to higher geni than the ones seen in the training data. 
The authors also provide some more qualitative analysis to investigate to which extent the internal representations learned by the model encode mathematical structures that are known to be relevant to solve the problem.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(S1) Investigating to which extent the recent successes of transformer models can transfer to other tasks, such as the one of solving fundamental problems in mathematics, is worthwhile and relevant.\", \"weaknesses\": \"(W1) The relevance and technical aspects cannot be well understood / evaluated unless the reader has some non-trivial background knowledge of enumerative geometry.\\n\\n(W2) The writing and exposition of the material can be improved.\", \"questions\": [\"*Recommendation*\", \"I recommend to reject the paper mainly because I believe ICLR is not a suitable venue, both for refereeing this paper (this paper needs to be reviewed by at least one expert in enumerative geometry, I don't know if there are such reviewers at ICLR) and for disseminating it (a journal in the field of computational enumerative geometry may be more suited). Furthermore, in my opinion the presentation of the material can be improved in several aspects before publication.\", \"*Comments and questions*\", \"To which extent is incorporating the conformal prediction framework in your analysis necessary? I am afraid this adds an additional layer of complexity that further hinders the communication of your findings. Maybe this discussion should be deferred to the appendix, keeping only what is strictly necessary to understand the main conclusion of your experiments in the main paper.\", \"I don't understand the paragraph on top of p. 7, and I don't think this is due to my lack of expertise in enumerative geometry. In particular, what does \\\"the neural network embedding p_g,n ... is a vector space\\\" mean? How can a function be a vector space? 
What does \\\"go to the inner product space\\\" mean? These are (to me) very loose, nonsensical mathematical statements.\", \"*Minor comments & typos*\", \"p.3 the acronym COO has not been introduced\", \"Figure 5 should be included in the main part of the paper. In general, avoid far-away forward references, especially to the appendix without mentioning that it is in the appendix.\", \"Use capitalization when referencing tables, figures, sections, equations, etc. in the text (no capitalization needed when referring to figures or tables in general). E.g. Figure lines 168, 197, 334, Section lines 204, Table lines 274, 298, Equation lines 319, 389 ... ...\", \"line 196 we -> We\", \"line 421: the sentence \\\"The interesting thing is that this is the performance of the non-linear probe.\\\" could be rephrased to better suit a formal publication.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"**Summary:**\\nThis paper introduces a Transformer-based model, DynamicFormer, with a novel Dynamic Range Activator (DRA) activation function tailored to model recursive functions with high variance and factorial growth. The study applies this methodology to computational enumerative geometry, specifically in computing \\\\(\\\\psi\\\\)-class intersection numbers. Additionally, the paper incorporates conformal prediction for uncertainty quantification and provides interpretability analyses that uncover evidence of the model learning mathematical structures, such as the Virasoro constraints.\\n\\n**Strengths:** \\n1. **Novel Methodology:** \\n - The DRA activation function addresses challenges in modeling recursive functions with factorial growth, a significant contribution beyond standard applications of transformers. \\n - Incorporates uncertainty quantification in a rigorous and tailored manner for high-variance recursive data. \\n\\n2. 
**Interdisciplinary Contribution:** \\n - Bridges machine learning and enumerative geometry, showcasing how AI can assist in tackling complex mathematical problems. \\n - Reveals new insights into mathematical structures through model interpretability, contributing to the AI for Science community. \\n\\n3. **Solid Experimental Design:** \\n - Comprehensive evaluation of DRA against other activation functions on synthetic data. \\n - Demonstrates the ability of DynamicFormer to generalize to out-of-distribution cases. \\n - Insightful analysis of the network's internal representations and their alignment with known mathematical properties. \\n\\n**Weaknesses:** \\n1. **Complexity and Accessibility:** \\n - The paper's interdisciplinary nature makes it challenging for readers without expertise in enumerative geometry or advanced mathematics to fully grasp the significance of the contributions. \\n - The initial presentation of material was fragmented, though significantly improved during the discussion phase. \\n\\n2. **Limited Exploration of Applications:** \\n - While the paper demonstrates the feasibility of using transformers for \\\\(\\\\psi\\\\)-class intersection numbers, broader applications and deeper theoretical insights into the DRA activation function could strengthen the work further. \\n\\n3. **Venue Suitability Concerns:** \\n - Some reviewers questioned whether ICLR is the most appropriate venue for this work, given its mathematical focus. However, the paper aligns with the growing interest in interdisciplinary research at ML conferences. \\n\\n**Discussion:** \\nThe reviewers expressed mixed opinions. One reviewer rated the paper highly, highlighting its interdisciplinary novelty and methodological contributions. Other reviewers raised concerns about the presentation and accessibility but acknowledged the potential of the work. 
The authors addressed these concerns through substantial revisions, clarifications, and added context, improving the paper significantly. Despite its complexity, the paper offers novel contributions to both the ML and mathematical communities.\\n\\n**Suggestions for Camera-Ready Submission:** \\n1. Continue improving the clarity and accessibility of the manuscript, particularly for readers with limited background in enumerative geometry. \\n2. Highlight broader implications and potential applications of the DRA activation function in modeling recursive functions beyond the specific mathematical context explored. \\n3. Ensure all implementation details and code are easily accessible to facilitate reproducibility. \\n\\n**Conclusion:** \\nThis paper represents an original and significant contribution to the intersection of AI and advanced mathematics. While the application is domain-specific, the methodologies and insights have broader relevance to the AI for Science community. The constructive feedback from reviewers has been well-addressed, and I recommend the revised manuscript for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response part 2 to Reviewer qDDj\", \"comment\": \"**C3**: I don't understand the paragraph on top of p. 7, and I don't think this is due to my lack of expertise in enumerative geometry. In particular, what does \\\"the neural network embedding p_g,n ... is a vector space\\\" means ? How can a function be a vector space? What does \\\"go to the inner product space\\\" means? These are (to me) very loose nonsensical mathematical statements.\\n\\n**A3** Thank you for pointing out these issues. We apologize for the confusion caused by the inaccurate mathematical statements in that paragraph. 
In the revised manuscript, we have carefully corrected our writing to clarify these points and ensure mathematical precision.\\n\\n**A4** for Minor comments & typos: Thank you for your detailed suggestions. We have carefully taken your comments into account and corrected the typos. The only thing we could not do was move Fig. 5 to the main text. Upon the suggestion of Reviewer ncfx, we moved the implementation details of the model to Appendix C, where Figure 5 is the most relevant.\\n\\nIf our comments resolve your concerns, we would appreciate it if you would consider raising your score!\"}
In matrix models, they appear as the connecting points between random matrices and intersection theory.\\n\\n**C2**: I am not sure the work is very interesting from a geometric point of view (the interesting thing is to gain theoretical insight into what psi-intersection numbers are, not to somehow get numerically accurate estimates of them).\\n\\n**A2** Thank you for your constructive comment and for raising this important point. This is a great question. Firstly, while gaining theoretical insights into $\\\\psi$-class intersection numbers is indeed crucial, we believe that developing methods to accurately compute these numbers\\u2014especially at higher genera\\u2014is also of significant importance to the field of enumerative geometry. Our work demonstrates that it is possible to numerically model these complex intersection numbers effectively, which is a non-trivial task due to their intricate recursive and combinatorial structures. Although this study does not explore computations up to arbitrarily high genus, it provides evidence that such a challenging task is feasible using our approach. In the Methods section (Appendix C) of the manuscript, we elaborate on why computing these intersection numbers is extremely difficult and how our methods contribute to overcoming these challenges.\\n\\nMoreover, this work serves as a foundational step in a broader research program we are undertaking. Working with $\\\\psi$-class intersection numbers has been a testing ground for the methods we developed. Over the past 30 years, significant insights have been gained about these intersection numbers, which is not the case for many other enumerative problems with similar structures, such as those arising in Gromov\\u2013Witten theory. **Directly venturing into unexplored territories without a solid testing ground would be imprudent. 
By establishing a reliable methodology with $\\\\psi$-class intersections, we have set a solid path to explore more complex enumerative problems.** For instance, the interpretability analysis we performed offers valuable insights into the underlying mathematical structures. In the process of theorem building, having guidance or hints can be immensely beneficial. If the neural network suggests the existence of a relation or identity, it may not constitute a formal proof, but\\u2014as mathematicians often experience\\u2014knowing the potential outcome is a significant step towards proving it rigorously. Also, for other types of intersection numbers, our approach can help in conjecturing properties of their asymptotic formulas. This can lead to new developments of enumerative geometry. Therefore, our work not only contributes computational tools but also serves as a foundational and motivational step for mathematicians interested in this field. In the conclusion section, we tried to summarise this.\\n\\nWe hope this clarifies the contributions of our work to enumerative geometry and addresses your concerns.\"}", "{\"title\": \"Summary of Rebuttal and Discussion Period\", \"comment\": \"Dear Reviewers, AC and SAC,\\n\\nWe would like to express our gratitude to the reviewers and ACs for their time, effort and valuable feedback on our work. We have carefully considered all comments and have made significant revisions to the manuscript to address the concerns raised during the review and discussion phases.\\n\\n**Our Main Contributions:**\\n\\n1. **Development of DRA:**\\n - We introduced the **Dynamic Range Activator (DRA)**, a new non-linear activation function, specifically designed to handle complex, high-variance recursive data with factorial growth. We address the long-standing challenge of modeling recursive functions with high variance, representing a significant advancement over existing models that struggle with such intricate structures. 
**To the best of our knowledge, we are the first to explore and successfully model these recursive functions.**\\n\\n2. **Uncertainty Quantification:**\\n - We incorporated and adapted a proper conformal uncertainty estimation method, ensuring that our model's predictions are not only accurate but also reliable, which is important in mathematics, where precision and rigor are paramount. **Our work is the first to apply uncertainty quantification in AI-based computational mathematics.**\\n\\n3. **Discovery of Mathematical Insights:**\\n - Our analysis revealed that the network autonomously learned **Virasoro constraints**, providing insights into how the model generalizes to out-of-distribution data.\\n - By investigating the asymptotic behavior of $\\\\psi$-class intersection numbers, our interpretability analyses offered valuable insights into underlying mathematical structures.\\n - **This finding opens new avenues for data-driven human-machine collaboration in closed-form expression discovery, theorem formulation, and conjecturing properties related to asymptotic formulas of other enumerative problems.**\\n\\n\\n**Responses to Reviewers:**\\n\\n- **Reviewer ncfx:**\\n - We really appreciate their detailed and constructive feedback.\\n - After addressing all concerns, the reviewer acknowledged our efforts, stating: \\\"This paper is solid and interesting, and no doubt will be of interest to the greater ICLR community.\\\"\\n - The reviewer increased their score accordingly, reflecting their positive assessment.\\n\\n- **Reviewer daJv:**\\n - We thoroughly addressed the concerns raised, including expanding on the motivations for using Transformers, enhancing the Related Work section, providing code, and improving the presentation of our findings.\\n - Despite our detailed responses and acknowledging the novelty and strengths of our work, Reviewer daJv dismissed our responses entirely and unexpectedly raised concerns about the paper's suitability for ICLR, 
even though similar works have been accepted at previous ICLR and other major AI conferences (provided in our last message to all reviewers and ACs).\\n\\n- **Reviewer Yn4i:**\\n - They engaged minimally with our contributions. Despite our detailed clarifications on our contribution from both the ML and Computational Algebraic Geometry perspectives, their feedback remained vague and uninformative, ultimately expressing reservations about the paper's excitement.\\n\\n- **Reviewer qDDj:**\\n - We improved the manuscript based on their feedback, including correcting errors, elaborating more on key concepts, and providing examples of similar works that have been accepted at previous ICLR and other major AI conferences. However, the reviewer did not update their evaluations or engage further in the discussion.\\n\\n\\n**Final Remarks:**\\n\\nWe believe that our work offers important contributions to the field of AI for Science. This work bridges the gap between ML and advanced mathematics, aligning with ICLR's interest in innovative and interdisciplinary research.\\n\\nWe respectfully request that the area chairs and senior area chairs consider our comprehensive responses and the positive evaluation from Reviewer ncfx when making their decision. We are confident that our work will be of interest and value to the ICLR community.\\n\\nThank you for your time and consideration. We remain available to provide any additional information or clarification as needed.\\n\\nBest regards,\\n\\nThe Authors\"}
4WvCoXU2dF
SymMaP: Improving Computational Efficiency in Linear Solvers through Symbolic Preconditioning
[ "Hong Wang", "Jie Wang", "Minghao Ma", "Haoran Shao", "Haoyang Liu" ]
Matrix preconditioning is a crucial modern technique for accelerating the solving of linear systems. Its effectiveness heavily depends on the choice of preconditioning parameters. Traditional methods often depend on domain expertise to define a set of fixed constants for specific scenarios. However, the characteristics of each problem instance also affect the selection of optimal parameters, while fixed constants do not account for specific instance characteristics and may lead to performance loss. In this paper, we propose **Sym**bolic **Ma**trix **P**reconditioning (**SymMaP**), a novel framework based on Recurrent Neural Networks (RNNs) for automatically generating symbolic expressions to compute efficient preconditioning parameters. Our method begins with a grid search to identify optimal parameters according to task-specific performance metrics. SymMaP then performs a risk-seeking search over the high-dimensional discrete space of symbolic expressions, using the best-found expression as the evaluation criterion. The resulting symbolic expressions are seamlessly integrated into modern linear system solvers to improve computational efficiency. Experimental results demonstrate that SymMaP consistently outperforms traditional algorithms across various benchmarks. The learned symbolic expressions can be easily embedded into existing specialized solvers with negligible computational overhead. Furthermore, the high interpretability of these concise mathematical expressions facilitates deeper understanding and further optimization of matrix preconditioning strategies.
[ "Matrix Preconditioning", "Symbolic Learning", "Linear System Solver" ]
https://openreview.net/pdf?id=4WvCoXU2dF
https://openreview.net/forum?id=4WvCoXU2dF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "shCp2Dq530", "gOHT8iaO79", "IxXoAE72gj", "0Sa6HOTyik" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730431163215, 1731170670050, 1730685370881, 1731847855543 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8948/Reviewer_57eX" ], [ "ICLR.cc/2025/Conference/Submission8948/Reviewer_X8gT" ], [ "ICLR.cc/2025/Conference/Submission8948/Reviewer_83ou" ], [ "ICLR.cc/2025/Conference/Submission8948/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the problem of matrix preconditioning, which is a crucial ingredient in the iterative solution of linear systems. In particular, the authors focus on the parameters of some preconditioners and propose a machine-learning approach to determining these parameters. They base their approach on parameterized PDEs, such that it will predict preconditioner parameters given PDE parameters. The authors construct a training dataset and use it to learn symbolic regression formulas for these parameters. They argue that symbolic regression is more efficient and interpretable than other regression techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses practical problems.\", \"The symbolic approach has the potential to reveal the relationship between PDE parameters and preconditioner parameters, leading to new mathematical discovery and analysis.\", \"The symbolic regression performance is competitive with neural network regression.\"], \"weaknesses\": [\"The empirical evaluation of the proposal is conducted on either too simplistic but less effective preconditioners (SOR and SSOR), or a widely used preconditioner (AMG) with only one single parameter. 
It would be more informative if the authors experimented with more AMG parameters (such as the choice of the smoother and other coarsening parameters) and conducted a sensitivity analysis.\", \"The generation of training data is costly: (# data points) * (# grid searches) * (cost of one preconditioned solve)\", \"The training of symbolic regression can also be costly because of the sample efficiency of reinforcement learning.\", \"It needs to be clarified of the relationship between genetic programming in section 2.2 and the rest of the paper.\"], \"questions\": [\"The authors use the condition number as the evaluation metric for the AMG experiments, which is hard to compute for large matrices. Why not use computation time instead, which is the most straightforward metric? Would the conclusion be different depending on which metric to use? Note that condition number does not necessarily correlate with time, since coarsening generally poses tradeoffs between preconditioning time and convergence speed.\", \"Continued from the above question: When constructing the training set for AMG, did the authors use time, iteration, or condition number to determine optimal parameters?\", \"How do the following three times compare: time of generating the training data, time of training the symbolic regression, and time of one preconditioned solve?\", \"How many data points are needed to train symbolic regression?\", \"For the interpretability analysis in 5.3, do the expressions come from SymMaP 1 in Tables 1 to 3? Are there trade-offs between the preconditioning performance of a symbolic formula and the interpretability?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript presents a novel approach to quasi-optimal preconditioner construction based on symbolic regression. 
The authors perform extensive numerical simulations to identify the optimal parameters for given linear systems through grid search. The pairs (a parameter for a linear system and the corresponding optimal preconditioner parameter) compose the training dataset. Then, the combination of RL trainer and RNN for symbolic regression fits the analytical expression for the optimal preconditioner parameter. Experimental results demonstrate that the presented pipeline gives such a preconditioner that, on average, the runtime for linear solvers is smaller for different PDE classes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Disclaimer: I am not an expert in symbolic regression, so advances in the study from this perspective could not be well-identified.\\n\\nThe manuscript's main strength is its attempt to apply the general symbolic regression technique to the preconditioner construction problem. The presented pipeline looks non-trivial, although the objective function is standard. In experiments, the presented approach generates preconditioners that establish faster convergence of the linear solver. In addition, the SyMMap could reconstruct the optimal expression for the $\\\\omega$ in SSOR for positive definite matrices.\", \"weaknesses\": \"The main weakness of this study is that it misses the crucial step of using the derived symbolic expression to generate the optimal preconditioner. I have carefully checked Figure 2 and do not find an explicit connection between the expression $\\\\tau$ and the compiled preconditioner in the library. Some remarks are given in Section 5.3; however, the presented expressions depend on unclear variables $x_1, x_2, x_3$, so how to use them in preconditioner construction is unclear.\\n\\nIn addition, I can list the following weaknesses:\\n1. The manuscript does not explicitly present the parametrizations of considered PDEs and how these parameters are passed as input to RNN. 
Also, the authors ignore details on how training and testing sets are prepared.\\n2. I did not find the name of the linear solver used to evaluate the preconditioners, e.g., CG, GMRES, or smth else.\\n3. The incomplete Cholesky/LU preconditioner is not included in the comparison, although it is among the most powerful.\\n4. The authors do not report the runtime for training the presented pipeline (although for Darcy, the runtime is given in Table 6). Moreover, they do not discuss how many linear systems are needed to solve with the generated preconditioner to pay off the training costs compared to classical approaches like SSOR with the optimal parameter or ILU. Tables 1 and 2 show a gain in runtime, which is good, but how much time does the training of symbolic regression require? \\n5. For unknown reasons, the authors include a comparison with MLP. However, a more interesting comparison is replacing RNN with Transformer architecture and analyzing the results in the performance of linear solver and training runtime. \\n6. No theoretical guarantees on the performance of such an approach or motivation of the presented pipeline are presented, so the robustness of this approach remains unclear.\", \"questions\": [\"Some questions are in the previous section on the weaknesses of the submission; other questions are given below.\", \"1. What are alternative approaches to constructing optimal preconditioner parameters? Please add a paragraph to place your work in the context of learning preconditioners from data or similar techniques. For example:\", \"https://proceedings.mlr.press/v202/li23e.html\", \"https://sc18.supercomputing.org/proceedings/workshops/workshop_files/ws_lasalss102s2-file1.pdf\", \"https://arxiv.org/abs/2405.15557\", \"https://arxiv.org/abs/2401.02016\", \"https://arxiv.org/abs/1806.06045\", \"2. What are the spectrum properties of the preconditioned matrix with the generated preconditioner? 
It would be interesting to observe whether they only reduce the condition number or additionally increase spectrum clustering. Condition numbers for the preconditioned matrices are presented in Tables 3 and 6, but only for limited types of PDEs.\", \"3. What was the $\\\\epsilon$ parameter used in experiments, and does it significantly affect the training runtime/performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce Symbolic Matrix Preconditioning (SymMaP), a method that identifies symbolic expressions for efficient preconditioning parameters. These generated expressions can be seamlessly integrated into modern solvers with minimal computational overhead.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed framework involves defining the optimal preconditioning parameters, subsequently searching for symbolic expressions, and integrating these expressions into the modern solver.\", \"weaknesses\": [\"Although the application of the proposed approach in this context may be novel, the reasoning behind why it could outperform a class of preconditioners based on graph neural networks (https://proceedings.mlr.press/v202/li23e.html, https://arxiv.org/abs/2405.15557) is not evident.\", \"The authors test their framework using three datasets (Biharmonic, Darcy Flow, and Elliptic PDE) with minimal variation in matrix size. It would be beneficial to validate their approach using the SuiteSparse Matrix Collection (https://sparse.tamu.edu/).\", \"I found some sections a bit challenging to follow. 
Could the authors consider reorganizing the paper or providing more detailed explanations for each step of SymMaP to enhance clarity?\", \"The values in the columns labeled \\\"SymMap 1\\\" and \\\"SymMap 2\\\" in tables 1, 2, 3 are not clear and would benefit from additional information to clarify the results.\", \"A comparison of the performance in a CPU environment using an MLP is not adequate to make a definitive assertion `symbolic expressions possess equivalent expressive capabilities to neural networks in this scenario, effectively approximating the optimal parameter expressions`.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4WsHgA8EG1
BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing
[ "Dongliang Guo", "Mengxuan Hu", "Zihan Guan", "Thomas Hartvigsen", "Sheng Li" ]
Large multi-modal models inevitably decay over time as facts change and previously learned information becomes outdated. Traditional approaches such as fine-tuning are often impractical for updating these models due to their size and complexity. Instead, direct knowledge editing within the models presents a more viable solution. Current model editing techniques, however, typically overlook the unique influence ranges of different facts, leading to compromised model performance in terms of both generality and locality. To address this issue, we introduce the concept of the generality-locality trade-off in multi-modal model editing. We develop a new model editing dataset named OKEDIT, specifically designed to effectively evaluate this trade-off. Building on this foundation, we propose \textbf{BalancEdit}, a novel method for balanced model editing that dynamically achieves an optimal balance between generality and locality. BalancEdit utilizes a unique mechanism that generates both positive and negative samples for each fact to accurately determine its influence scope and incorporates these insights into the model's latent space using a discrete, localized codebook of edits, without modifying the underlying model weights. To our knowledge, this is the first approach explicitly addressing the generality-locality trade-off in multi-modal model editing. Our comprehensive results confirm the effectiveness of BalancEdit, demonstrating minimal trade-offs while maintaining robust editing capabilities. Our code and dataset will be available.
[ "Multi-modal learning", "Model editing" ]
https://openreview.net/pdf?id=4WsHgA8EG1
https://openreview.net/forum?id=4WsHgA8EG1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "LGYs2AjoXB", "EWoFSuReCP", "9g1R2vpmB6", "7k5woxwdZE" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1733155602583, 1730389334279, 1730854487504, 1730420048347 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8608/Authors" ], [ "ICLR.cc/2025/Conference/Submission8608/Reviewer_zYCx" ], [ "ICLR.cc/2025/Conference/Submission8608/Reviewer_56rU" ], [ "ICLR.cc/2025/Conference/Submission8608/Reviewer_CUDx" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"BalancEdit presents a new solution for updating large multi-modal models by achieving a balance between generality and locality in model edits. By introducing the OKEDIT dataset, this approach evaluates and addresses the generality-locality trade-off, a challenge overlooked by other methods. BalancEdit showcases minimal compromise in model performance, offering a robust and efficient approach to knowledge editing without altering the model's core weights.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The motivation of balancing generality and specificity in model editing is good.\\n\\nThe method for determining the influence radius is simple and requires no extra training, which improves usability. \\n\\nMoreover, the model shows good efficiency in terms of both time and data usage.\\n\\nIntroducing more generality test for each editing case is beneficial.\", \"weaknesses\": \"Using a black or white image as a negative sample is straightforward but may not achieve an optimal balance between generality and locality.\\n\\nThe editing method involves finetuning a layer, which may be simplistic. 
Additionally, the experimental results lack a comparison with the SERAC method.\\n\\nRegarding image quality, I have some doubts about the diffusion-generated images used as tests. In Fig. 4, the second image is totally different from the first image. In Fig. 6, the first and third examples of the generality test are entirely different from the editing sample, making the test results questionable.\\n\\nThe experiments involve Blip2-OPT and MiniGPT-4. However, considering the fast development of MLLMs, newer models like the LLaVA series, which are widely recognized, should be tested.\", \"questions\": \"As those mentioned in the weaknesses.\", \"additions\": \"How many locality test images are there for each editing case? If only one image, this can be imbalanced, because the generality test has 10 images for each.\\n\\nDo you verify the quality of the generated images? How do you verify them?\\n\\nWhy don\\u2019t you present the results of the SERAC method?\", \"writing_issue\": \"\", \"table_3\": \"Misuse of bold text; some highlighted values are not the best results.\\n\\nA critical issue exists on lines 489-514, where two paragraphs redundantly convey the same information. This appears to be a significant oversight of the content.\\n\\nMissing reference in line 862\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Existing knowledge editing methods often overlook the influence scope of a knowledge edit, leading to limited generality and locality on samples similar to the edited ones. This paper proposes a novel method, BalancEdit, to optimize the trade-off between generality and locality. To assess this trade-off, this paper constructs a new dataset, OKEDIT. 
Experimental results demonstrate that BalancEdit outperforms existing methods in both single and sequential editing settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The issues addressed in this paper are of considerable significance. Existing editing methods affect the performance of the edited model on samples related to the edited ones. This paper proposes a new method to adjust the influence radius dynamically. The innovative approach of using positive and negative samples to estimate the influence radius of each knowledge edit is particularly commendable. Additionally, the paper clearly articulates the above issues and presents corresponding solutions.\", \"weaknesses\": \"1. The proposed method builds upon Grace [1], with the key differences being the use of positive and negative samples to estimate the influence radius and the fine-tuning of the transformation layer. However, the paper does not include an ablation study to evaluate the contributions of these two modules.\\n2. The proposed dataset OKEDIT employs GPT-4 and a diffusion model to generate rephrased images for assessing image generality. However, previous studies have noted that generated images may shift in content, leading to inconsistencies with the original images [2].\\n3. The use of the harmonic mean (HM) is questionable, as the presence of a minimum can result in a lower harmonic mean. In Table 3, the FT method applied to BLIP2-OPT shows a performance of less than 1% on locality.\\n\\n[1] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors\\n[2] VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark\", \"questions\": \"1. Why is the accuracy for the Base model in Table 3 not 0? In my humble opinion, the Acc and Loc should be 0 and 100 respectively, similar to the results presented in [1].\\n2. 
Why were sequential editing experiments conducted on OKVQA instead of MMEdit and OKEDIT, as proposed in this paper?\\n3. Previous studies have indicated that the MEND method can produce NaN values [2] during sequential editing; however, this issue does not appear in Table 4. Are there differences in the sequential editing settings between this study and [2]?\\n4. IMHO, if the weights in the language module are edited, it is essential to measure text locality and compare it with other methods.\\n5. The paper states that black images are used as negative samples across various visual recognition tasks. It would be beneficial to include citations to support this approach.\\n6. Some proprietary terms, such as MiniGPT-4 and BLIP-2 OPT, are used inconsistently throughout the text.\\n\\n[1] Can We Edit Multimodal Large Language Models\\uff1f\\n[2] VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"BalancEdit introduces a new model editing approach, addressing the limitations of traditional model editing techniques. Unlike existing methods, which often ignore the distinct influence of different facts, BalancEdit strikes an optimal balance between generality and locality. By using a codebook of localized edits and generating both positive and negative samples, it accurately assesses each fact's impact without altering the model's core structure. 
Tested on the newly developed OKEDIT dataset, BalancEdit demonstrates robust editing performance with minimal trade-offs, marking a significant advance in multi-modal model editing.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The motivation is good, particularly the attention to balancing generality and locality in model edits.\\nThe approach for setting the influence radius is straightforward, requiring no additional training, which enhances usability. \\nAdditionally, the model demonstrates good efficiency in terms of both time and data requirements.\", \"weaknesses\": \"The method\\u2019s visual reasoning goal is limited, offering little differentiation from MMEdit, especially as the test data format remains similar and is based on question answering.\\n\\nUsing a black image as a negative sample is simplistic and may fall short in defining an \\\"optimal balance between generality and locality.\\\" Consequently, the hyperparameter alpha is fixed, potentially limiting flexibility.\\n\\nImages in the generality and locality tests are generated by a diffusion model, which offers limited advancement over MMEdit due to inconsistent image quality.\\n\\nThe study uses Blip2-OPT and MiniGPT-4 as baseline models, which are somewhat outdated and limited. 
Architectures like LLaVA and related models may yield different results.\", \"writing_issue\": \"There is a major issue on page 10, lines 489-514, where two paragraphs convey the same information, likely due to an unintentional oversight.\", \"typo\": \"Line 723: \\u201clabelis\\u201d\", \"line_862\": \"missing reference\", \"table_3\": \"some bolded values are not the best results\\nThe example in Figure 4 is confusing, because the first image and the other two have significant differences, and the main subject is two people rather than a church.\", \"questions\": \"As those mentioned in weakness:\\n\\nDoes the visual reasoning goal in this approach offer substantial differentiation from MMEdit, given the test data's similarity in the form of QA? \\n\\nCould a more sophisticated method replace black images as negative samples to better define the balance between generality and locality?\\n\\nHow significant is the impact of using diffusion model-generated images for testing generality and locality, considering their variable quality? Do you verify the image quality by any means (especially human verification) to check whether the generated images can be used for testing?\\n\\nWould using more recent model architectures, like those in the LLaVA series, yield different results in these experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4W1wTg7q9o
UrbanWorld: An Urban World Model for 3D City Generation
[ "Yu Shang", "Yuming Lin", "Yu Zheng", "Fan Hangyu", "Jingtao Ding", "Jie Feng", "Jiansheng Chen", "Tian Li", "Yong Li" ]
Cities, as the essential environment of human life, encompass diverse physical elements such as buildings, roads and vegetation, which continuously interact with dynamic entities like people and vehicles. Crafting realistic, interactive 3D urban environments is essential for nurturing AGI systems and constructing AI agents capable of perceiving, decision-making, and acting like humans in real-world environments. However, creating high-fidelity 3D urban environments usually entails extensive manual labor from designers, involving intricate detailing and representation of complex urban elements. Therefore, accomplishing this automatically remains a longstanding challenge. Toward this problem, we propose UrbanWorld, the first generative urban world model that can automatically create a customized, realistic and interactive 3D urban world with flexible control conditions. Specifically, we design a progressive diffusion-based rendering method to produce 3D urban assets with high-quality textures. Moreover, we propose a specialized urban multimodal large language model (Urban MLLM) trained on realistic street-view image-text corpus to supervise and guide the generation process. UrbanWorld incorporates four key stages in the generation pipeline: flexible 3D layout generation from OSM data or urban layout with semantic and height maps, urban scene design with Urban MLLM, controllable urban asset rendering via progressive 3D diffusion, and MLLM-assisted scene refinement. We conduct extensive quantitative analysis on five visual metrics, demonstrating that UrbanWorld achieves state-of-the-art generation realism. Next, we provide qualitative results about the controllable generation capabilities of UrbanWorld using both textual and image-based prompts. Lastly, we verify the interactive nature of these environments by showcasing the agent perception and navigation within the created environments. 
We contribute UrbanWorld as an open-source tool available at https://github.com/Urban-World/UrbanWorld.
[ "Urban world model", "3D city generation" ]
Reject
https://openreview.net/pdf?id=4W1wTg7q9o
https://openreview.net/forum?id=4W1wTg7q9o
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jn3hXcIyN4", "a1dgrmG0Wm", "O46pEZgcW1", "IkuyoWHnjX", "Ft6PCkE26y", "6eKBKhBKZZ" ], "note_type": [ "official_review", "official_review", "official_review", "decision", "official_review", "meta_review" ], "note_created": [ 1730607748495, 1729377503988, 1730600425217, 1737523623616, 1730305203193, 1733558080654 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4185/Reviewer_YwTn" ], [ "ICLR.cc/2025/Conference/Submission4185/Reviewer_tHU8" ], [ "ICLR.cc/2025/Conference/Submission4185/Reviewer_gfwE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4185/Reviewer_eF1T" ], [ "ICLR.cc/2025/Conference/Submission4185/Area_Chair_e96G" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a generative urban world model that can automatically create a customized, realistic, and interactive 3D urban world with flexible control conditions. The code of this work was released.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The task of 3D urban generation is important.\\n2. The method is reasonable and looks to have better quantitative results than previous models.\\n3. The writing is clear and easy to follow.\", \"weaknesses\": \"1. Claim of World Model. This work belongs to the 3D urban generation. It is over-claimed to be a world model and barely related to AGI. Authors should precisely identify the task and topic. Then, focus on the specific topic and make it comprehensive rather than claim some large topics.\\n\\n2. Technical contributions. The motivation of the generation pipeline is unclear. Why do you need a vision language model? What are the special designs in your work different from others, and why do you need them? What is the special challenges that lead you to design the method? So far, the pipeline looks like a combination of recent advanced techniques, i.e., diffusion model and vision language model.\\n\\n3. Visual results. 
The visual results are insufficient. From only a few images, it is not convincing that the visual quality is better than that of other models. Also, Figures 4 and 6 contain some duplicated results.\\n\\n4. Evaluation of interactive environments. The evaluation of interactive environments is coarse. The navigation tasks are not really evaluated. A single image provides little evidence. What are the quantitative results, and what are the video results? How is the physics simulated? What is the physics engine? What is the training speed? What is the model, RL or IL? What are the evaluation metrics?\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces UrbanWorld, a generative model designed to automatically create realistic and interactive 3D urban environments, addressing the challenges of manual labor in urban scene design. UrbanWorld employs a progressive diffusion-based rendering method and a specialized urban multimodal large language model (Urban MLLM) trained on street-view image-text data to guide the generation process. The model consists of four key stages: flexible 3D layout generation, urban scene design using Urban MLLM, controllable asset rendering, and scene refinement. Extensive evaluations demonstrate that UrbanWorld achieves state-of-the-art realism and interactivity, outperforming existing methods like Infinicity and CityGen in generating diverse urban environments suitable for embodied agents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The code has been released. \\n3. The overall framework is technically sound. Based on a pre-defined set of common objects in the urban scenario, the framework bridges the gap between the 3D world and 2D views via pre-trained diffusion models. 
The pipeline is interesting. \\n4. The framework achieves controllable and customizable scene generation, which can support tasks that require agent-environment interactions.\", \"weaknesses\": \"1. Even though superior quantitative results are reported, the generated images are not realistic enough based on the demonstrations in the paper.\\n2. It would be better if the authors could provide more diverse qualitative results generated by the proposed method. The proposed system is claimed to be for 3D city generation. It would be good if a sequence of images/video captured with a moving camera were included to show the scene-level generation capability. \\n3. I am confused about the UV unwrapping and UV wrapping parts. How can you ensure that the wrapping process aligns the texture perfectly with the mesh model? For objects of different types and shapes, I believe this process can be hard for the diffusion model to handle. The UV unwrapping is usually not unique. Is there any mechanism to enforce equivariance to different unwrapping choices? \\n4. I noticed that the Position-aware Texture Completion module is applied to refine the texture map. Can you provide some qualitative results (visualizations) to compare the results before and after the refinement?\\n5. Section 4.4 is a little vague. How does your generated environment support navigation? What is the longest distance your navigation can cover? It would be better to show a bird's-eye view of your navigation environment.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a method for 3D urban scene creation called UrbanWorld, which facilitates customized and interactive 3D urban world generation. 
UrbanWorld uses Blender to create untextured 3D layouts from 2D maps and incorporates Urban MLLM to generate textual descriptions for assets. A diffusion-based method is then applied to generate and refine the geometry of the 3D assets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is written fluently and is easy to understand.\\n2. The proposed method shows relatively better results in generating city scenes with assets that have new appearances.\\n3. The authors effectively showcase various capabilities of the pipeline.\", \"weaknesses\": \"1. While the authors state that the method achieves \\u201ccustomized, realistic, and interactive 3D urban world generation,\\u201d the results appear more simulation-style and fall short of true realism. The texture quality, as seen in Fig. 3 and 4, is not particularly impressive, and there are no significant improvements over CityDreamer.\\n2. The absence of video results is notable. For a 3D generation task, video demonstrations would better illustrate the quality and realism of the generated scenes.\\n3. Fig. 4 includes scenes with humans and vehicles, but the method of incorporating these assets is unclear. Details on how these elements are introduced and animated within the scene are missing.\\n4. Most visual results focus on limited, local areas. For a city-level generation, it would be beneficial to include bird\\u2019s-eye-view results covering larger spatial regions, similar to CityDreamer.\\n5. Including a user study comparison would provide a clearer assessment of the visual quality of the generated scenes.\\n6. Although the authors claim the ability to create new assets, this appears limited to the level of appearance, with geometry remaining unchanged from the asset library. 
Given the importance of geometry in 3D generation, this aspect should be addressed.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces UrbanWorld, a generative model designed for the automatic creation of realistic, customizable, and interactive 3D urban environments. UrbanWorld employs a progressive, four-stage generation pipeline: flexible 3D layout creation, Urban Multimodal Large Language Model (Urban MLLM)-based scene design, diffusion-based asset rendering, and MLLM-driven scene refinement.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"UrbanWorld introduces a pipeline that integrates generative diffusion models with an urban-specific MLLM to achieve realistic urban scene creation. This combination allows for controlled generation of 3D assets and adaptive urban design.\", \"weaknesses\": \"1. The authors claimed section A is flexible urban layout \\u201cgeneration\\u201d. However, this is not like generation methods where the distribution of urban layouts are learned from real-world data [1][2][3]. It seems like the authors are just using OSM\\u2019s GT data (AIGC-layout is not explained anywhere in the paper). No detail is given on how did the authors transform the OSM data or AIGC data into untextured 3D urban environment. Is there any generation models or other networks involved? In short, if you are just using GT data and Blender add-on to import it, you can\\u2019t call the process \\u201cgeneration\\u201d.\\n\\n2. In section 3.2 and the Appendix A.2, the authors shows a general urban generation prompt is converted into prompts for different categories of urban objects. However, the same prompt is generated for all objects of the same class. 
Doesn\\u2019t that indicate they would have the exact same style and appearance? For example, if there were 50 buildings in the scene and they all shared the same T2I prompt, they would end up looking the same. Meanwhile, the authors state that the descriptions for all categories are generated by an MLLM, but do not explain where the reference image comes from.\\n\\n3. For a single asset, the authors generated textures from different views conditioned on the same text and reference image, then merged all textures. This approach cannot guarantee consistency between textures, as no 3D condition has been used to guide the 2D diffusion model. Meanwhile, it cannot be called a \\u201c3D diffusion renderer\\u201d, since the authors only run inference iteratively with pretrained 2D diffusion models. \\n\\n[1] Infinicity: https://arxiv.org/abs/2301.09637\\n[2] CityDreamer: https://arxiv.org/abs/2309.00610\\n[3] CityGen: https://arxiv.org/abs/2312.01508\", \"questions\": \"1. In Figure 3, texture refinement only shows marginal improvement for buildings. The authors should provide more examples, including other objects and views.\\n\\n2. In Figure 4, the authors show humans and vehicles; how are these generated? Are they also assets generated at some stage? Or added by manual post-processing? It is not mentioned anywhere in the paper, and this indicates that the visual quality comparison with other methods is completely unfair.\\n\\n3. Since the framework generates 3D scenes, I suggest the authors submit videos or at least multiple views of the same scene to demonstrate the quality and view consistency of the generated scenes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"**Summary**\\n\\nThe paper presents UrbanWorld, a generative world model for creating interactive 3D urban worlds. 
The generation consists of four stages: 1) generation of untextured 3D layouts based on the input 2D layout, 2) using an MLLM to generate textual descriptions detailing the appearance of the assets, 3) texturing of urban assets using diffusion based on the generated descriptions, and 4) scene refinement using an MLLM.\\n\\n**Strengths**\", \"reviewers_noted_the_following_strengths_of_the_work\": \"1. The task of 3D urban generation with controllable and customizable scene generation is important [YwTn,tHU8]\\n2. The method seems to be reasonable [YwTn,tHU8]\\n3. Code is provided [tHU8]\\n4. Comparisons show the proposed method to generate better results than previous models [YwTn,gfwE,eF1T]\\n5. Reviewers mostly found the paper to be well-written and easy to follow [tHU8,eF1T,YwTn]\\n\\n**Weaknesses**\\n\\nReviewers were negative on the submission, and noted that some claims are unsubstantiated, with the main weaknesses being:\\n\\n1. Inaccurate use of terms. For instance, the claims of \\\"World Model\\\" and \\\"AGI\\\" are not appropriate [YwTn]. The proposed framework is also not a generative model that learns the distribution, and the use of the term \\\"3D diffusion renderer\\\" is inaccurate [eF1T]\\n - Generated worlds do not seem realistic [gfwE, tHU8]\\n2. Weak evaluation\\n - It's hard to determine from the few visual images (and no videos or birds-eye visuals) that the generated world is actually better than prior work [YwTn, gfwE, tHU8]\\n - There was no evaluation of whether the generated environments can be interacted with [YwTn]\\n - No human evaluation [gfwE]\\n3. Some aspects are unclear\\n - How are humans and vehicles incorporated? [gfwE,eF1T]\\n - How are different appearances for different buildings / objects of the same class obtained? [eF1T]\\n - Details of UV unwrapping and wrapping [tHU8]\\n - Details of how navigation is supported [tHU8]\\n4. 
Design decisions and challenges are not clearly motivated or explained [YwTn]\\n\\n**Recommendation**\\n\\nAs all reviewers were negative on the submission, and there was no author response, the AC recommends reject.\", \"additional_comments_on_reviewer_discussion\": \"There was no author response, and no reviewer discussion.\"}" ] }
4VmagzA2Tp
Improving Molecule-Language Alignment with Hierarchical Graph Tokenization
[ "Yongqiang Chen", "Quanming Yao", "Juzheng Zhang", "James Cheng", "Yatao Bian" ]
Recently there has been a surge of interest in extending the success of large language models (LLMs) to graph modality, such as molecules. As LLMs are predominantly trained with 1D text data, most existing approaches adopt a graph neural network to represent a molecule as a series of node tokens and feed these tokens to LLMs for molecule-language alignment. Despite achieving some successes, existing approaches have overlooked the hierarchical structures that are inherent in molecules. Specifically, in molecular graphs, the high-order structural information contains rich semantics of molecular functional groups, which encode crucial biochemical functionalities of the molecules. We establish a simple benchmark showing that neglecting the hierarchical information in graph tokenization will lead to subpar molecule-language alignment and severe hallucination in generated outputs. To address this problem, we propose a novel strategy called HIerarchical GrapH Tokenization (HIGHT). HIGHT employs a hierarchical graph tokenizer that extracts and encodes the hierarchy of node, motif, and graph levels of informative tokens to improve the graph perception of LLMs. HIGHT also adopts an augmented molecule-language supervised fine-tuning dataset, enriched with the hierarchical graph information, to further enhance the molecule-language alignment. Extensive experiments on **14** molecule-centric benchmarks confirm the effectiveness of HIGHT in reducing hallucination by **40%**, as well as significant improvements in various molecule-language downstream tasks.
[ "molecular-language alignment", "large language models", "hierarchical graph neural networks", "tokenization", "biomolecular studies", "molecule" ]
Reject
https://openreview.net/pdf?id=4VmagzA2Tp
https://openreview.net/forum?id=4VmagzA2Tp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y6OxfghC2x", "sAic2P1sfO", "rksdqAUL74", "lWm149r2mp", "kIWmkp89yZ", "gP0vVB8RIS", "gH2XqxqFT7", "cHiFHlXPK5", "XLKIAhxtaU", "QxJi7Oe9He", "QnDD9wpC65", "PXlTJ5yM5Y", "JzVCJQrpFi", "Hur54uu7SA", "EhjFf5Lny7", "ESbucFxrZT", "DnyQQ04OYy", "BTXXvC4HY7", "8ztF3mjDub", "5ZrFsQD8MI", "4MNTVWj4TG", "21H7n4R1s0" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732490051188, 1733144136501, 1732489843141, 1732489769704, 1733085442530, 1732489490545, 1730102306811, 1733145830055, 1732490208972, 1732489591659, 1730691342951, 1732489914213, 1733145295028, 1737523859718, 1733145020580, 1732489700682, 1730656525347, 1732489629148, 1734704873707, 1732490125266, 1730084506899, 1732490149623 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Reviewer_mTJE" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Reviewer_P2nZ" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Reviewer_mTJE" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Reviewer_QiTU" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Area_Chair_wFv7" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ], [ "ICLR.cc/2025/Conference/Submission7740/Reviewer_mY8L" ], [ "ICLR.cc/2025/Conference/Submission7740/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer P2nZ (part 2)\", \"comment\": \"> W2 Zero-shot or few-shot scenarios of the proposed model.\\n**A2** We follow the previous practice in training generalist multimodal language models such as LLaVA [1,2,3], where the models are pretrained with either instruction tuning or held-in task data. We therefore consider two settings:\\n- The first is to train the model with all chemical reaction prediction data for three epochs to elicit the format-following and knowledge-adaptation capabilities of the LGLMs pretrained after stage 1. Model names are marked with `(all)`.\\n- The second is to train the model on one chemical reaction prediction task and to generalize to a new, unseen chemical reaction task. 
Specifically, we consider two task generalization setups: a) from retrosynthesis to forward reaction prediction; b) from forward reaction prediction to reagent prediction;\\nThe results are given in the tables below, from which we can find the excellent generalization capabilities of HIGHT.\\n| Reagent Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|-----------------------------|:------:|:-----:|:------------:|:-----:|:------:|:-------:|:---------:|\\n| InstructMol-G | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.22 | 1 |\\n| InstructMol-G (all) | 0.016 | 0.459 | 29.238 | 0.359 | 0.225 | 0.189 | 0.988 |\\n| HIGHT-G | 0.05 | 0.462 | 28.97 | 0.441 | 0.314 | 0.275 | 1 |\\n| HIGHT-G (all) | 0.090 | 0.570 | 22.512 | 0.483 | 0.372 | 0.333 | 0.999 |\\n| Forward Reaction Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.031 | 0.853 | 24.790 | 0.512 | 0.362 | 0.303 | 0.993 |\\n| InstructMol-G (all) | 0.020 | 0.841 | 25.109 | 0.426 | 0.339 | 0.284 | 0.998 |\\n| HIGHT-G | 0.037 | 0.869 | 23.759 | 0.590 | 0.394 | 0.340 | 0.993 |\\n| HIGHT-G (all) | 0.182 | 0.911 | 18.469 | 0.737 | 0.561 | 0.510 | 1 |\\n| Retrosynthesis | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| InstructMol-G (all) | 0.000 | 0.806 | 32.128 | 0.292 | 0.234 | 0.202 | 0.985 |\\n| HIGHT-G | 0.008 | 0.863 | 28.912 | 0.564 | 0.340 | 0.309 | 1.000 |\\n| HIGHT-G (all) | 0.097 | 0.888 | 22.098 | 0.713 | 0.522 | 0.487 | 1 |\\n\\n| Retro => Forward | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | 
MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|------------------|:------:|:------------:|:------------:|:------------:|:-------------:|:-------------:|:---------:|\\n| InstructMol-G | 0 | 0.3647834384 | 31.78757515 | 0.2398994628 | 0.1309456921 | 0.1387899167 | 0.998 |\\n| HIGHT-G | 0 | 0.3674502876 | 31.23023023 | 0.3030181836 | 0.1588174116 | 0.1626311445 | 0.999 |\\n| Forward => Rea | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0 | 0.2239567194 | 47.12348178 | 0.1804644832 | 0.04004807569 | 0.05254333143 | 0.988 |\\n| HIGHT-G | 0 | 0.240373805 | 43.45045965 | 0.1780840825 | 0.04057551142 | 0.0512682462 | 0.979\"}", "{\"title\": \"Thank you for your support!\", \"comment\": \"Dear Reviewer mTJE,\\n\\nThank you again for your time and efforts in reviewing our work. Your suggestions do help improve our manuscript a lot! Please feel assured that we will incorporate all the aforementioned results and discussions in our revised version.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer QiTU (part 3)\", \"comment\": \"> W4. There is no comparison with [a] in Table 4. The results in [a] are superior to all the models from Table 4.\\n\\n**A4** We need to clarify that, according to the ICLR reviewer guideline https://iclr.cc/Conferences/2025/ReviewerGuide : `We consider papers contemporaneous if they are published within the last four months. That means, since our full paper deadline is October 1, if a paper was published (i.e., at a peer-reviewed venue) on or after July 1, 2024, authors are not required to compare their own work to that paper.`, therefore, the referred paper was released in August 2024, and is considered as contemporaneous.\\n\\nIn addition, the solution in the referred paper takes additional external knowledge and adopts a different training and evaluation setup. 
Meanwhile, the proposed prompting strategy from the referred work can be considered orthogonal to our research, as the prompting strategy could also be incorporated into our approach. \\n\\nNevertheless, thank you for bringing this related work to our attention. We have revised our manuscript to cite the referred work.\\n\\n[a] Srinivas, S. S., & Runkana, V. (2024). Crossing New Frontiers: Knowledge-Augmented Large Language Model Prompting for Zero-Shot Text-Based De Novo Molecule Design. arXiv preprint arXiv:2408.11866.\\n\\n> W5. In Table 5, Mol-instruction has the highest MACCS FTS for the retrosynthesis task. However, a smaller number is bolded.\\n\\n**A5** We have fixed the typo in the revised version.\\n\\n> W6. The comparison on MotifHallu is not complete. Please provide a comparison with SMILES-based approaches. \\n\\n**A6.1** We compare against the state-of-the-art SMILES-based model GALACTICA 6.7B [3]. Due to the time constraint, we randomly sample 100 molecules containing 3,800 question-answer pairs to conduct the evaluation:\\n| | Avg F1 | Pos F1 | Neg F1 |\\n|----------------|--------|--------|--------|\\n| InstructMol | 52.0 | 97.2 | 11.8 |\\n| HIGHT | 69.1 | 59.8 | 78.4 |\\n| GALACTICA 6.7B | 57.0 | 18.6 | 95.4 |\\n\\nIt can be found that the SMILES-based approaches still suffer from severe hallucination on the positive classes. HIGHT maintains relatively high robustness against hallucination for both positive and negative classes.\\n\\n> W6.2 Moreover, the improvement on the MotifHallu benchmark is expected, as the proposed approach was explicitly designed to better solve this task.\\n\\n**A6.2** We need to clarify that, without either proper architecture design or instruction tuning, the performance gain at MotifHallu may not be expected, as demonstrated in our ablation studies. Furthermore, **the performance improvements on other downstream tasks are not explicitly designed nor expected**. 
Nevertheless, it can be observed across all the downstream tasks that resolving the Motif Hallucination issue with proper architecture design and instruction tuning indeed brings consistent and non-trivial performance gains, verifying our discussion about the necessity of capturing the intrinsic hierarchical graph information for graph-language alignment.\\n\\n**References**\\n\\n[1] Large language models on graphs: A comprehensive survey, arXiv\\u201923.\\n\\n[2] Specialist or Generalist? Instruction Tuning for Specific NLP Tasks, EMNLP\\u201923.\\n\\n[3] GALACTICA: A Large Language Model for Science, 2022\"}", "{\"title\": \"Response to Reviewer QiTU (part 2)\", \"comment\": \"> W3. Taking into consideration that the difference between specialist and generalist models is not clear, the resulting model does not demonstrate performance superior to baselines in most of the experiments.\\n**A3** We need to clarify that it is common for generalist models to perform worse than the corresponding specialist models on some tasks [2]. Nevertheless, due to the integration of the generalist LLMs, the resulting LGLMs are capable of multiple tasks by simply switching adapters. \\n\\nTo further evaluate the generalist capabilities, we follow the previous practice in training generalist multimodal language models such as LLaVA, where the model is pretrained with either instruction tuning or held-in task data. We therefore consider two settings:\\n- The first is to train the model with all chemical reaction prediction data for three epochs to elicit the format-following and knowledge-adaptation capabilities of the LGLMs pretrained after stage 1. Model names are marked with `(all)`.\\n- The second is to train the model on one chemical reaction prediction task and to generalize to a new, unseen chemical reaction task. 
Specifically, we consider two task generalization setups: a) from retrosynthesis to forward reaction prediction; b) from forward reaction prediction to reagent prediction;\\nThe results are given in the tables below, from which we can find the excellent generalization capabilities of HIGHT.\\n| Reagent Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|-----------------------------|:------:|:-----:|:------------:|:-----:|:------:|:-------:|:---------:|\\n| InstructMol-G | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.22 | 1 |\\n| InstructMol-G (all) | 0.016 | 0.459 | 29.238 | 0.359 | 0.225 | 0.189 | 0.988 |\\n| HIGHT-G | 0.05 | 0.462 | 28.97 | 0.441 | 0.314 | 0.275 | 1 |\\n| HIGHT-G (all) | 0.090 | 0.570 | 22.512 | 0.483 | 0.372 | 0.333 | 0.999 |\\n| Forward Reaction Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.031 | 0.853 | 24.790 | 0.512 | 0.362 | 0.303 | 0.993 |\\n| InstructMol-G (all) | 0.020 | 0.841 | 25.109 | 0.426 | 0.339 | 0.284 | 0.998 |\\n| HIGHT-G | 0.037 | 0.869 | 23.759 | 0.590 | 0.394 | 0.340 | 0.993 |\\n| HIGHT-G (all) | 0.182 | 0.911 | 18.469 | 0.737 | 0.561 | 0.510 | 1 |\\n| Retrosynthesis | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| InstructMol-G (all) | 0.000 | 0.806 | 32.128 | 0.292 | 0.234 | 0.202 | 0.985 |\\n| HIGHT-G | 0.008 | 0.863 | 28.912 | 0.564 | 0.340 | 0.309 | 1.000 |\\n| HIGHT-G (all) | 0.097 | 0.888 | 22.098 | 0.713 | 0.522 | 0.487 | 1 |\\n\\n\\n\\n| Retro => Forward | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | 
MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|------------------|:------:|:------------:|:------------:|:------------:|:-------------:|:-------------:|:---------:|\\n| InstructMol-G | 0 | 0.3647834384 | 31.78757515 | 0.2398994628 | 0.1309456921 | 0.1387899167 | 0.998 |\\n| HIGHT-G | 0 | 0.3674502876 | 31.23023023 | 0.3030181836 | 0.1588174116 | 0.1626311445 | 0.999 |\\n| Forward => Rea | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0 | 0.2239567194 | 47.12348178 | 0.1804644832 | 0.04004807569 | 0.05254333143 | 0.988 |\\n| HIGHT-G | 0 | 0.240373805 | 43.45045965 | 0.1780840825 | 0.04057551142 | 0.0512682462 | 0.979 |\"}", "{\"comment\": \"Thanks for the response. You have solved my concerns through additional experiments and tables. I would like to increase my rating from 5 to 6.\"}", "{\"title\": \"Response to Reviewer mTJE (part 1)\", \"comment\": \"Thank you for your time and insightful suggestions for our paper. Please find our responses to your concerns below.\\n\\n> W1 Whether the performance gains come from the larger tokenizer.\\n\\n**A1** The hierarchical tokenizer in HIGHT takes three distinct tokenizers where each of which shares the same number of parameters and architecture as that in a node-centric tokenizer. To examine whether the additional two times of parameters are the main contributor to the improvements by HIGHT, we conduct additional experiments with a larger node-centric tokenizer that has three times the parameters as the original one. The evaluation results are given in the table below. Due to the time limit, we evaluate only with the three tasks in the chemical reaction analysis. 
It can be seen that the larger tokenizer does not bring performance improvements.\\n| | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|--------------------|-------:|------:|-------------:|------:|-------:|--------:|----------:|\\n| Reagent Prediction | | | | | | | |\\n| InstructMol | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.22 | 1 |\\n| + Larger Tokenizer | 0.040 | 0.454 | 29.163 | 0.416 | 0.284 | 0.248 | 1.000 |\\n| HIGHT | 0.05 | 0.462 | 28.97 | 0.441 | 0.314 | 0.275 | 1 |\\n| Forward Reaction | | | | | | | |\\n| InstructMol | 0.031 | 0.853 | 24.79 | 0.512 | 0.362 | 0.303 | 0.993 |\\n| + Larger Tokenizer | 0.040 | 0.861 | 24.051 | 0.544 | 0.380 | 0.328 | 0.996 |\\n| HIGHT | 0.037 | 0.869 | 23.759 | 0.59 | 0.394 | 0.34 | 0.993 |\\n| Retrosynthesis | | | | | | | |\\n| InstructMol | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| + Larger Tokenizer | 0.001 | 0.842 | 30.613 | 0.459 | 0.287 | 0.263 | 0.999 |\\n| HIGHT | 0.008 | 0.863 | 28.912 | 0.564 | 0.34 | 0.309 | 1 |\\n\\n> W2 Detailed description of the evaluation tasks.\\n\\n**A2** We have revised our manuscript to include a more detailed discussion and description of the evaluation tasks, including the inputs and outputs of each task.\"}
This paper proposes to incorporate hierarchical graph information into LGLMs, and the authors achieve this with a new architecture and the instruction tuning dataset HiPubChem.\\n2. To address the hallucination issue, the paper creates MotifHallu, the first hallucination benchmark based on the existence of common functional groups.\\n3. The paper includes extensive experiments with 14 real-world molecular and reaction comprehension benchmarks. The results show that HIGHT significantly reduces the hallucination on MotifHallu and demonstrates significant improvement on a number of tasks.\", \"weaknesses\": \"1. The hierarchical graph tokenization process, which involves the use of multiple adapters, is likely to be more computationally expensive than traditional node-centric tokenization. The paper does not discuss the computational complexity. Also, the LLM is tuned using LoRA, and the number of parameters tuned should be discussed.\\n2. One motivation for applying LLMs to graph data is to utilize the generalization capability of LLMs. However, this paper does not provide experimental results on zero-shot or few-shot scenarios of the proposed model. I think it would greatly strengthen the paper if HIGHT had good performance in such cases.\\n3. The performance of HIGHT will largely depend on the backbone LLMs, and only vicuna-v-1.3-7B is evaluated.\", \"questions\": \"1. At line 273, the authors said \\\"attach positional encodings $p$ to all of the tokens\\\". How are positional encodings of motifs obtained?\\n2. If the input graphs use the positional encodings, then should the original positional encodings in the LLMs be disabled? e.g., the widely used RoPE for the graph input part? \\n3. What is the parameter count to be tuned?\\n4. Besides the vicuna-v-1.3-7B, can the authors provide experimental results for other LLM backbones? Since different backbones may have a big impact on the performance.\\n5. How does the proposed model perform in zero-shot or few-shot scenarios?\\n6. 
In Table 2, Llama-13B has worse performance than Llama-7B on most datasets. Also, Galactica-120B has a sharp performance drop on BACE. Any explanation for these results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Gentle Reminder] Discussion period is closing soon\", \"comment\": \"Dear Reviewer mY8L,\\n\\nThank you again for your time and valuable comments on our work. We understand you are busy. To facilitate our discussion, we provide a short summary of our responses to your concerns below:\\n\\n> Novelty of this work\\n\\nThe novelty of this work lies in the systematic investigation of hierarchical tokenization (as motifs for molecules) in Graph-Language alignment, which has never been discussed by previous works.\\n\\n> The author should show that the method has better downstream performance than simply telling the LLM the existence of functional groups.\\n\\nWe conduct ablation studies showing that the alignment training with HIGHT indeed yields better downstream performance than simply telling the LLM the existence of functional groups.\\n\\n> Specialist or generalist \\n\\n- We follow the terminology of InstructMol: the specialist or generalist nature depends on whether the underlying LLM is a generalist model;\\n- We also provide experiments evaluating the generalist capabilities of HIGHT, showing significant improvements over node-centric tokenization;\\n\\n> Methods without motif knowledge still achieve better performance\\n\\nAs Graph-Language alignment involves a different training paradigm (i.e., aligning LLM knowledge to understand graphs) and objective from previous methods (i.e., understanding the graphs), they are not directly comparable. 
**As we show in experiments, for molecule-language alignment, telling LLMs the existence of motifs brings substantial benefits, mitigating the hallucination and improving all the downstream tasks**.\\n\\nPlease kindly let us know if our responses above clarify your concerns. We would sincerely appreciate it if you could jointly consider our responses above when making the final evaluation of our work!\"}", "{\"title\": \"Response to Reviewer mY8L (part 1)\", \"comment\": \"Thank you for your suggestions and time in reviewing our paper. Please find our detailed responses to your concerns below.\\n\\n> W1 Novelty of this work.\\n\\n**A1** We need to clarify that, despite the use of motifs in previous GNNs applied to molecular-related tasks, it remains unknown:\\n- whether motif information is useful for molecule-language alignment;\\n- how to incorporate the motif information to improve molecule-language alignment;\\nAs we show in the ablation study in the response to Reviewer mTJE, without proper architecture design or the instruction tuning dataset proposed in this work, LGLMs cannot properly understand the motif information and achieve suboptimal molecule-language alignment.\\n**We are the first to investigate the necessity of incorporating motif information for molecule-language alignment, an issue that simple extensions of the referred work cannot thoroughly identify and resolve**.\\n\\n> W2 The author should justify how this is a helpful task besides simply telling the LLM that \\\"there is such a functional group.\\\" For example, the author should show that the method has better downstream performance than simply telling the LLM the existence of functional groups.\\n\\n**A2** In our experiments, we indeed evaluate the downstream performances of HIGHT compared to the LGLM with node-centric tokenization. 
The high performance in the motif hallucination benchmark demonstrates that the LGLM can understand the existence of motifs in a molecule, therefore, HIGHT obtains generically high downstream performances.\\nTo further justify the effectiveness of HIGHT, we compare HIGHT to InstructMol+HiPubChem which can be considered by directly telling the LGLM there is such a functional group. In the tasks of chemical reaction prediction shown in table below, **simply telling the LGLM there is such a functional group can not directly help with the downstream task performance. The architecture also matters**.\\n\\n| | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|--------------------|-------:|------:|-------------:|------:|-------:|--------:|----------:|\\n| Reagent Prediction | | | | | | | |\\n| InstructMol | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.22 | 1 |\\n| + HiPubChem | 0.016 | 0.473 | 30.455 | 0.369 | 0.237 | 0.194 | 0.990 |\\n| HIGHT | 0.05 | 0.462 | 28.97 | 0.441 | 0.314 | 0.275 | 1 |\\n| Forward Reaction | | | | | | | |\\n| + PE | 0.010 | 0.829 | 26.623 | 0.419 | 0.328 | 0.268 | 0.981 |\\n| + HiPubChem | 0.011 | 0.819 | 26.010 | 0.396 | 0.315 | 0.264 | 0.975 |\\n| HIGHT | 0.037 | 0.869 | 23.759 | 0.59 | 0.394 | 0.34 | 0.993 |\\n| Retrosynthesis | | | | | | | |\\n| InstructMol | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| + HiPubChem | 0.000 | 0.755 | 35.811 | 0.282 | 0.218 | 0.177 | 0.997 |\\n| HIGHT | 0.008 | 0.863 | 28.912 | 0.564 | 0.34 | 0.309 | 1 |\\n\\n> W3 Can you be more specific on the distinctions of the specialist model and the generalist model?\\n\\n**A3** We follow the naming in InstructMol to categorize the baseline models. We need to clarify that, the term we used in the paper is the `LLM Based Generalist Models`, which refers to the LGLMs based on generalist LLMs. 
**The key distinction between the specialist model and the generalist model is whether the model is built upon generalist LLMs**. The generalist LLMs allow for open-form communication and the resulting LGLMs are capable of multiple tasks by simply switching adapters. We have revised our manuscript to clearly define the term before using it.\\n\\n> Q1. In tables 3, 4, 5, are all baselines also fine-tuned with the same dataset as HIGHT?\\n\\n**A4** As our focus is to demonstrate the superiority of hierarchical graph tokenization, InstructMol is the directly comparable baseline, which follows the same pretraining and finetuning recipe as HIGHT.\\nAs for the other baselines, we follow the previous work [1,2] to conduct the experiments, since they adopt a significantly different model architecture, pretraining paradigm, and pretraining data (some of them are closed-source).\\n\\n**References**\\n\\n[1] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models, ICLR\\u201924.\"}", "{\"title\": \"Response to Reviewer mTJE (part 2)\", \"comment\": \"> Q1. How many parameters do those two tokenizers have respectively?\\n\\n**A3** We count the number of parameters in different tokenizers, including parameters in the GNN encoder as well as the projector that projects the graph tokens into the dimensions of the language model, shown in the table below:\\n| | graph token dimension | num of params in GNN encoder | num of params in projector | num of params in tokenizer |\\n|--------------|-----------------------|------------------------------|----------------------------|----------------------------|\\n| Node-Centric | 300d | 1,860,905 | 1,232,896 | 3,093,801 |\\n| Node-Centric | 900d | 16,382,705 | 3,690,496 | 20,073,201 |\\n| HIGHT | 300d | 1,865,105 | 3,796,992 | 5,662,097 |\\n\\nIt can be found that HIGHT does not require many more parameters than the previous node-centric tokenizers. 
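As a quick sanity check, the node-centric projector figures in the table above can be reproduced from the dense-layer parameter formula. This is a sketch assuming a single linear layer with bias and a 4096-dimensional LLM hidden size (as in Vicuna-7B); the HIGHT projector has a different internal structure and is not checked here.

```python
def linear_params(in_dim: int, out_dim: int, bias: bool = True) -> int:
    """Parameters of one dense layer: weight matrix plus optional bias vector."""
    return in_dim * out_dim + (out_dim if bias else 0)

# Node-centric projector: 300-d graph tokens -> 4096-d LLM embedding space.
print(linear_params(300, 4096))  # 1232896, matching the 300d row above
# The 900d variant from the table:
print(linear_params(900, 4096))  # 3690496, matching the 900d row above
```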
The overall number of parameters is significantly less than that of the LLMs (usually around 7 billion). In addition, when using a node-centric tokenizer that is 4 times larger than HIGHT, the performances remain significantly lower than HIGHT, demonstrating that the number of parameters in the tokenizer is not the key contributor to the improvements of HIGHT.\\n\\n> Q2. What are the ablation study results on other tasks such as property prediction and chemical reaction prediction?\\n\\n**A4** We have revised our manuscript to include the ablation study results on motif hallucination, property prediction, and chemical reaction prediction. For reference, we also append the results below:\\n| | Avg F1 | Pos F1 | Neg F1 |\\n|----------------|--------|--------|--------|\\n| InstructMol | 52.6 | 95.7 | 9.5 |\\n| + PE | 51 | 98.8 | 3.2 |\\n| + HiPubChem | 69.1 | 59.8 | 78.4 |\\n| HIGHT | 66.85 | 85.5 | 48.2 |\\n| - HiPubChem | 54.55 | 96.6 | 12.5 |\\n\\n| | HOMO\\u2b07\\ufe0f | LUMO\\u2b07\\ufe0f | \\\\Delta e\\u2b07\\ufe0f | AVG\\u2b07\\ufe0f |\\n|----------------|:------:|:------:|:---------:|:------:|\\n| InstructMol | 0.0111 | 0.0133 | 0.0147 | 0.013 |\\n| + PE | 0.030 | 0.040 | 0.036 | 0.035 |\\n| + HiPubChem | 0.030 | 3.402 | 0.049 | 1.123 |\\n| HIGHT | 0.0078 | 0.0086 | 0.0095 | 0.0086 |\\n\\n| | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|--------------------|-------:|------:|-------------:|------:|-------:|--------:|----------:|\\n| Reagent Prediction | | | | | | | |\\n| InstructMol | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.22 | 1 |\\n| + PE | 0.009 | 0.423 | 30.833 | 0.370 | 0.231 | 0.197 | 0.986 |\\n| + HiPubChem | 0.016 | 0.473 | 30.455 | 0.369 | 0.237 | 0.194 | 0.990 |\\n| HIGHT | 0.05 | 0.462 | 28.97 | 0.441 | 0.314 | 0.275 | 1 |\\n| Forward Reaction | | | | | | | |\\n| InstructMol | 0.031 | 0.853 | 24.79 | 0.512 | 0.362 | 0.303 | 0.993 
|\\n| + PE | 0.010 | 0.829 | 26.623 | 0.419 | 0.328 | 0.268 | 0.981 |\\n| + HiPubChem | 0.011 | 0.819 | 26.010 | 0.396 | 0.315 | 0.264 | 0.975 |\\n| HIGHT | 0.037 | 0.869 | 23.759 | 0.59 | 0.394 | 0.34 | 0.993 |\\n| Retrosynthesis | | | | | | | |\\n| InstructMol | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| + PE | 0.000 | 0.792 | 33.859 | 0.295 | 0.218 | 0.192 | 0.983 |\\n| + HiPubChem | 0.000 | 0.755 | 35.811 | 0.282 | 0.218 | 0.177 | 0.997 |\\n| HIGHT | 0.008 | 0.863 | 28.912 | 0.564 | 0.34 | 0.309 | 1 |\\n\\nIt can be found that merely incorporating positional encoding or hierarchical instruction tuning is not sufficient to achieve the same performance as HIGHT. On the contrary, without a proper architecture design as HIGHT, instruction tuning with HiPubChem will confuse LLMs and lead to degenerated downstream task performances.\"}", "{\"summary\": \"This paper presents a new approach to aligning molecular graph representations with language using a method called Hierarchical Graph Tokenization (HIGHT). Traditional graph-language alignment models primarily focus on node-level information, often neglecting the inherent hierarchical structure of molecules, which leads to alignment issues and hallucination in large language models (LLMs).\\n\\nThe authors introduce HIGHT, which utilizes a hierarchical graph tokenizer to capture information at the node, motif, and entire molecule levels. This tokenizer incorporates both atom-level and motif-level tokens, which are then used to improve alignment with language models. To address the alignment of hierarchical molecular data with textual descriptions, the authors also develop an enhanced molecular instruction tuning dataset called HiPubChem, which provides detailed motif information.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The Hierarchical Graph Tokenization (HIGHT) technique is a major advancement. 
By incorporating hierarchical structure at multiple levels (node, motif, and graph), the paper addresses a crucial gap in previous molecule-language alignment methods, which typically rely only on node-level information. This hierarchical approach captures the functional groups and structural motifs inherent in molecules, improving the model\u2019s ability to represent complex biochemical properties accurately.\\n\\n2. The introduction of HiPubChem, an augmented molecular instruction tuning dataset enriched with motif and functional group information, enhances model training by aligning molecular structural details with language descriptions. This contribution is valuable for future work in molecular and biochemical language model alignment.\\n\\n3. The effectiveness of each of the two methods was verified through simple ablation studies.\", \"weaknesses\": \"1. The introduction of the hierarchical graph tokenizer seems to make the tokenizer larger compared with the ordinary node-level tokenizer. It should be discussed whether the performance gain comes from the larger tokenizer.\\n\\n2. There should be more detailed descriptions and discussions of the evaluation tasks.\", \"questions\": \"1. How many parameters do those two tokenizers have respectively?\\n2. What are the ablation study results on other tasks such as property prediction and chemical reaction prediction? \\n3. What are the input and output of the molecular property prediction task and other tasks? The performance gain mainly comes from hierarchical graph tokenization, and it has nothing to do with the new tuning dataset, right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer P2nZ (part 1)\", \"comment\": \"Thank you for your time and valuable comments about our paper. 
Please find our responses to your concerns below.\\n\\n> W1.1 Computational complexity.\\n\\n**A1.1** For both training and inference, the additional computational overhead mainly lies in the processing of the additional motif tokens. Nevertheless, since the number of motifs is usually less than the number of atoms, it only adds a constant overhead to the overall complexity. \\nAs shown in the table below, we count the average graph size of PubChem and HiPubChem, where HiPubChem adds 9 additional tokens on average. The actual preprocessing and training times are shown below.\\n\\n| | Graph Size | Preprocessing Time | Training Time |\\n|----------------|------|------|------|\\n| PubChem | 34.39 | 16min 32sec | 8hour 17min 59sec | \\n| HiPubChem | 43.21 | 25min 35sec | 15hour 36min 23sec | \\n\\nAlthough tuning HIGHT with HiPubChem requires longer training time, the absolute time remains within a reasonable and affordable range.\\nMeanwhile, we also compare the inference time of InstructMol and HIGHT across 5 realistic tasks.\\n\\n| | Property Prediction | MolCaption | Reagent Prediction | Forward Reaction | Retrosynthesis |\\n|----------------|------|------|------|------|------|\\n| InstructMol | 14min 54sec | 6hour 22min 27sec | 56min 56sec | 1hour 34min 28sec | 1hour 50min 47sec | \\n| HIGHT | 15min 12sec | 4hour 59min 50sec | 50min 29sec | 1hour 22min 08sec | 1hour 49min 42sec | \\n\\nFrom the results, we can see that, during inference, the LLM latency takes up the majority of the time. 
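The constant-overhead argument in A1.1 can be made concrete with a toy token count. This is only a sketch: whether a single graph-level token is appended is an assumption, and the atom/motif counts are illustrative of the reported ~34.4 vs ~43.2 average graph sizes.

```python
# Node-centric input: one token per atom. Hierarchical input: one token per
# atom, one per motif, plus (assumed here) a single graph-level token.
def node_centric_len(n_atoms: int) -> int:
    return n_atoms

def hierarchical_len(n_atoms: int, n_motifs: int) -> int:
    return n_atoms + n_motifs + 1  # atoms + motifs + graph token

# Illustrative molecule with 34 atoms and 8 motifs:
extra = hierarchical_len(34, 8) - node_centric_len(34)
print(extra)  # 9 extra tokens, in line with the averages reported above
```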
A well-trained LGLM with HIGHT is able to generate more concise and valid answers and thus may take less time during inference.\\n\\n \\n> W1.2 Number of parameters tuned via LoRA.\\n\\n**A1.2** Here are the number of parameters in each component tunable during the whole pretraining process:\\n- When pretraining the GNN tokenizer, the number of tunable parameters is the number of parameters in GNN encoder;\\n- In stage 1, the number of tunable parameters is the number of parameters in the projector;\\n- In stage 2, the number of tunable parameters is the number of parameters in the projector and in LoRA;\\n\\n| | num of params in GNN encoder | num of params in projector | num of params in LoRA |\\n|--------------|------------------------------|----------------------------|----------------------------|\\n| Node-Centric | 1,860,905 | 1,232,896 | 159,907,840 |\\n| HIGHT | 1,865,105 | 3,796,992 | 159,907,840 |\"}", "{\"title\": \"[Gentle Reminder] Discussion period is closing soon\", \"comment\": \"Dear Reviewer P2nZ,\\n\\nWe are grateful for your time and valuable comments on our work. We understand you are busy. To facilitate our discussion, we provide a short summary of our responses to your concerns below:\\n\\n> Whether InstructMol is node-centric?\\n\\nWe provide details showing that InstructMol, along with many seminal LGLM works, are node-centric.\\n\\n> Computational complexity and parameter scale\\n\\nWe provide a detailed discussion of the complexity in terms of training and inference, along with the parameter scale analysis.\\n\\n> Zero-shot or few-shot performance\\n\\nWe supplement additional experiments evaluating the zero-shot performances of HIGHT, which demonstrates consistent improvements over node-centric tokenization.\\n\\n> Results with other LLM backbones\\n\\nWe train and evaluate HIGHT with Llama-2-7B-chat. 
The results demonstrate the consistent and significant improvements of HIGHT over node-centric tokenization.\\n\\nPlease kindly let us know if our responses above clarify your concerns. We would sincerely appreciate it if you could jointly consider our responses above when making the final evaluation of our work!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"[Gentle Reminder] Discussion period is closing soon\", \"comment\": \"Dear Reviewer QiTU,\\n\\nWe would like to thank you again for your time and efforts in reviewing our work. We understand you are busy. To facilitate our discussion, we provide a short summary of our responses to your concerns below:\\n\\n> Whether InstructMol is node-centric?\\n\\nWe provide details showing that InstructMol, along with many seminal LGLM works, is node-centric.\\n\\n> Explanation on the generalist and specialist model\\n\\n- We follow the terminology of InstructMol: the specialist or generalist nature depends on whether the underlying LLM is a generalist model;\\n- We also provide experiments evaluating the generalist capabilities of HIGHT, showing significant improvements over node-centric tokenization;\\n\\n> Comparison with the referred work and SMILES-based baselines\\n\\n- We revised our manuscript to include the suggested reference (we will upload it once the permission is open to us), which is orthogonal to our work;\\n- We benchmark one of the state-of-the-art SMILES-based models, GALACTICA, which also hallucinates about the motifs present in the molecule;\\n\\nPlease kindly let us know if our responses above clarify your concerns. We would sincerely appreciate it if you could jointly consider our responses above when making the final evaluation of our work!\"}", "{\"title\": \"Response to Reviewer QiTU (part 1)\", \"comment\": \"We appreciate your time and efforts in reviewing our paper. We believe there is a misunderstanding and are confident we can resolve your concerns. 
Please find our detailed responses below.\\n\\n> W1 Whether InstructMol and previous approaches are using node-centric tokenization.\\n\\n**A1** We need to clarify that, **InstructMol indeed uses the node-centric tokenization**:\\n- In the paragraph below Table 1 in the paper of InstructMol, it states `we extract a graph representation \\\\mathbf{Z}_G\\\\in\\\\mathbb{R}^{N\\\\times d} at the node level`, and `|\\\\mathcal{V}|=N is the total number of atoms`. Therefore, the graph tokens in InstructMol contain $N$ atom tokens, which is node-centric.\\n- In the open-sourced code of InstructMol (https://github.com/IDEA-XL/InstructMol/blob/publish/llava/model/llava_graph_arch.py#L83 ), line83 of the `/llava/model/llava_graph_arch.py`, the node features are the exact inputs to the projector and to the LLM. Therefore, the implementation of InstructMol also takes the node-centric tokenization approach.\\n\\nMoreover, most previous LGLMs use the node-centric tokenization approach when feeding the graph tokens to align to the LLMs. Here we provide a list of representative works under the category of `LLM as Predictor`` in the survey of [1]:\\n| | Molecular Inputs | Tokenization |\\n|---------------------------------------------|-------------------------------|---------------------------|\\n| HIGHT | Molecule graph | Hierarchical tokenization |\\n| SMILES-BERT (Wang et al., 2019) | SMILES | N/A |\\n| MolGPT (Bagal et. al., 2021) | SMILES | N/A |\\n| KV-PLM (Zeng et al., 2022) | SMILES | N/A |\\n| Chemformer (Irwin et. al., 2022) | SMILES | N/A |\\n| MFBERT (Abdel-Aty and Gould, 2022) | SMILES | N/A |\\n| MolT5 (Edwards et al., 2022) | SMILES | N/A |\\n| Text+Chem T5 (Christofidellis et al., 2023) | SMILES | N/A |\\n| MolXPT (Liu et al., 2023) | SMILES | N/A |\\n| RT (Born and Manica, 2023) | SMILES | N/A |\\n| CaR (Qian et. 
al., 2023) | SMILES | N/A |\\n| GPT-MolBERTa (Balaji et al., 2023) | SMILES | N/A |\\n| GIMLET(Zhao et al., 2023) | Molecule graph | Node-centric tokenization |\\n| InstructMol (Cao et al., 2023) | Molecule graph | Node-centric tokenization |\\n| MolCA (Liu et al., 2024) | SMILES&Molecule graph | Node-centric tokenization |\\n| 3D-MoLM (Li et al., 2024) | SMILES&Molecule graph | Node-centric tokenization |\\n| MolTC (Fang et. al., 2024) | SMILES&Molecule graph | Node-centric tokenization |\\n| GraphGPT(Tang et al., 2024) | Neighbor Graph | Node-centric tokenization |\\n| LLaGA (Chen et al., 2024) | Neighbor Graph | Node-centric tokenization |\\n| GraphLLM (Chai et al., 2023) | Textual description and graph | Node-centric tokenization |\\n\\nFrom the table, we can find that, most of the recent works in LGLMs take a node-centric tokenization approach.\\n\\n> W2. A better explanation for specialist and generalist models.\\n\\n**A2** We follow the naming in InstructMol to categorize the baseline models. We need to clarify that, the term we used in the paper is the `LLM Based Generalist Models`, which refers to the LGLMs based on generalist LLMs. **The key distinction between the specialist model and the generalist model is whether the model is built upon generalist LLMs**. The generalist LLMs allow for open-form communication and the resulting LGLMs are capable of multiple tasks by simply switching adapters. We have revised our manuscript to clearly define the term before using it.\"}", "{\"summary\": \"The authors study Large Graph Language Models (LGLM). Drawing inspiration from Multimodal LLMs, authors focus on the task of incorporating graph data as a separate modality with a GNN encoder and an adapter. Authors conclude that node-centric tokenization of molecules leads to LLM hallucinations when asked about the presence of specific fragments. 
To overcome this issue, the authors propose to enrich the molecule's description by adding the tokens corresponding to BRICS fragments that are present in the molecule. The experimental results demonstrate that such a tokenization scheme reduces the amount of motif-related hallucinations and improves performance on other tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"An improved tokenization of molecular graphs that enriches molecule's description with motif tokens.\", \"weaknesses\": \"Specifically, most existing LGLMs directly take the node tokens from GNNs as inputs to LLMs (Cao et al., 2023):\\nThe paper cites InstructMol as a previous approach that utilizes node-centric tokenization. However, if I understand correctly, InstructMol takes the embedding of the whole graph along with the SMILES representations of the molecule. Moreover, it is not clear which previous models use the node-centric tokenization and whether there are such models at all.\\n\\nSection 4.3 describes the fine-tuning approach that involves two stages, where the second stage is the finetuning on MoleculeNet, CheBI-20 and Mol-instructions specialized datasets. In my opinion, this implies that the resulting model is specialized. Please provide a better explanation of specialist and generalist models.\\n\\nTaking into consideration that the difference between specialist and generalist models is not clear, the resulting model does not demonstrate performance superior to baselines in most of the experiments.\\n\\nThere is no comparison with [1] in Table 4. The results in [1] are superior to all the models from Table 4.\\n\\nIn Table 5, the Mol-instruction has the highest MACCS FTS for the retrosynthesis task. However, a smaller number is bolded.\\n\\nThe comparison on MotifHallu is not complete. Please provide a comparison with SMILES-based approaches. 
Moreover, the improvement on the MotifHallu benchmark is expected, as the proposed approach was explicitly designed to better solve this task.\\n\\n[1] Srinivas, S. S., & Runkana, V. (2024). Crossing New Frontiers: Knowledge-Augmented Large Language Model Prompting for Zero-Shot Text-Based De Novo Molecule Design. arXiv preprint arXiv:2408.11866.\", \"questions\": \"Listed in Cons.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mTJE (part 3)\", \"comment\": \"> Q3.1 What are the input and output of the molecular property prediction task and other tasks?\\n\\n**A5.1** In Appendix B, we provided the details and examples for each dataset and task incorporated in our evaluation.\\nTo further improve clarity, we have revised our manuscript to include the details about the inputs and outputs for the molecular property prediction tasks and other tasks. For reference, we provide a brief summary below:\\n| | input | output |\\n|------------------------------------------------|-----------------------------------------------------------|-------------------|\\n| motif hallucination | molecule and question about the existence of a motif | yes or no |\\n| molecular property prediction (classification) | molecule and question about the existence of the property | yes or no |\\n| molecular property prediction (regression) | molecule and question about the value of the property | property value |\\n| molecular caption | molecule and question asking for the molecular caption | molecular caption |\\n| chemical reaction prediction | molecules and question about the reaction | molecular results |\\n\\n> Q3.2 The performance gain mainly comes from hierarchical graph tokenization, and it has nothing to do with the new tuning dataset, right?\\n\\n**A5.2** From the ablation studies in the response `A4`, we can find that both hierarchical graph tokenization and 
the HiPubChem tuning dataset are necessary for the performance improvements. The lack of either one of them (i.e., improving node-centric tokenization with merely one of the techniques) cannot effectively recover the performance of HIGHT. The main reason is that using merely one of the techniques may cause even more confusion to the LGLM and lead to alignment and performance degeneration in the downstream tasks.\"}", "{\"metareview\": \"While this paper proposes a novel hierarchical graph tokenization method (HIGHT) to address shortcomings in molecular-language alignment, it ultimately fails to meet the bar for acceptance at ICLR due to some concerns as follows.\\n1. Inadequate Justification of Impact: The manuscript does not convincingly demonstrate that the proposed method's improvements on downstream tasks arise from meaningful advancements in model design, as opposed to task-specific tuning or dataset augmentation. Some experimental results suggest that simply adding motifs as input offers similar benefits without architectural changes.\\n\\n2. Lack of Robustness Across Models and Tasks: Results indicate that the improvements are highly contingent on specific experimental setups (e.g., Vicuna 7B backbone) and may not generalize across alternative LLM architectures or unseen tasks. This limits the broader applicability of the approach.\\n\\n3. Evaluation: Despite extensive experimentation, key benchmarks and comparisons are missing. For instance, there is insufficient evaluation against SMILES-based or contemporary molecule-text alignment methods. Additionally, some baselines are tuned differently, complicating fair comparisons. The method's practical impact is hindered by increased complexity and resource demands (e.g., additional preprocessing, longer training times) relative to the modest performance gains reported.\\n\\nGiven these concerns, a rejection recommendation is made.\", \"additional_comments_on_reviewer_discussion\": \"1. 
Novelty and Contribution\", \"raised_concerns\": \"Reviewers noted the lack of comparisons with some state-of-the-art SMILES-based and molecule-language alignment methods. Questions were also raised about the model\\u2019s performance on zero-shot and few-shot tasks.\\nAuthors\\u2019 Response: The authors included comparisons with SMILES-based models like GALACTICA and demonstrated HIGHT\\u2019s robustness in few-shot scenarios. They also clarified their evaluation methodology for baselines.\", \"evaluation\": \"These additional results were appreciated, but the comparisons highlighted that HIGHT\\u2019s improvements were limited to specific setups and tasks, reducing its general impact.\\n\\nAll the points contribute to my final decision.\"}", "{\"title\": \"Response to Reviewer P2nZ (part 3)\", \"comment\": \"> W3 Other LLM backbone.\\n\\n**A3** We conduct additional experiments with the other LLM backbone, Llama-2-7b-chat, and evaluate the performance of LGLMs with node-centric tokenization and with HIGHT on motif hallucination, as well as chemical reaction prediction benchmarks. 
The results are given in below, from which we could still find the consistent and significant performance of HIGHT with another LLM backbone:\\n\\n| | Avg F1 | Pos F1 | Neg F1 |\\n|----------------|--------|--------|--------|\\n| InstructMol | 52.6 | 95.7 | 9.5 |\\n| InstructMol+Llama-2-7b-chat | 51.2 | 99.6 | 2.8 |\\n| HIGHT | 55.9 | 85.5 | 48.2 |\\n| HIGHT+Llama-2-7b-chat | 60.2 | 55.1 | 65.2 |\\n\\n| Reagent Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n|-------------------------------|:------:|:-----:|:------------:|:-----:|:------:|:-------:|:---------:|\\n| InstructMol-G | 0.031 | 0.429 | 31.447 | 0.389 | 0.249 | 0.220 | 1.000 |\\n| InstructMol-G+Llama-2-7b-chat | 0.016 | 0.454 | 28.961 | 0.352 | 0.220 | 0.179 | 0.982 |\\n| HIGHT-G | 0.050 | 0.462 | 28.970 | 0.441 | 0.314 | 0.275 | 1.000 |\\n| HIGHT-G+Llama-2-7b-chat | 0.057 | 0.495 | 26.591 | 0.453 | 0.333 | 0.293 | 1.000 |\\n| Forward Reaction Prediction | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.031 | 0.853 | 24.790 | 0.512 | 0.362 | 0.303 | 0.993 |\\n| InstructMol-G+Llama-2-7b-chat | 0.015 | 0.801 | 25.129 | 0.409 | 0.328 | 0.279 | 0.945 |\\n| HIGHT-G | 0.037 | 0.869 | 23.759 | 0.590 | 0.394 | 0.340 | 0.993 |\\n| HIGHT-G+Llama-2-7b-chat | 0.042 | 0.873 | 23.854 | 0.590 | 0.402 | 0.344 | 0.996 |\\n| Retrosynthesis | Exact\\u2b06\\ufe0f | BLEU\\u2b06\\ufe0f | Levenshtein\\u2b07\\ufe0f | RDK\\u2b06\\ufe0f | MACCS\\u2b06\\ufe0f | MORGAN\\u2b06\\ufe0f | Validity\\u2b06\\ufe0f |\\n| InstructMol-G | 0.001 | 0.835 | 31.359 | 0.447 | 0.277 | 0.241 | 0.996 |\\n| InstructMol-G+Llama-2-7b-chat | 0.000 | 0.767 | 34.589 | 0.275 | 0.215 | 0.181 | 0.989 |\\n| HIGHT-G | 0.008 | 0.863 | 28.912 | 0.564 | 0.340 | 0.309 | 1.000 |\\n| HIGHT-G+Llama-2-7b-chat | 
0.006 | 0.865 | 28.964 | 0.563 | 0.338 | 0.306 | 0.999 |\\n\\n\\n> Q1. In line 273, the authors said \\\"attach positional encodings to all of the tokens\\\", How are position encodings of motifs obtained?\\n\\n**A4** As illustrated via Eq 7, HIGHT will first construct a new graph with the motif as \\u201csuper nodes\\u201d added into the original graph, with the edges connected to the nodes in the motif. The positional encodings are calculated based on the new graph with \\u201csuper nodes\\u201d. Therefore, the positional encodings of the motif super nodes are the positional encodings of the motifs.\\n\\n> Q2. If the input graphs use the positional encodings, then, should the original positional encodings in the LLMs be disabled? e.g, the widely used ROPE for the graph input part?\\n\\n**A5** Since the original LLM is trained with the LM positional encoding such as ROPE, which has a significantly different representational property from the graph positional encodings, disabling the original positional encoding may severely affect the original LLM capabilities. Therefore, we mainly add the graph positional encodings to the graph tokens before they are projected to the LLM representation space, which can be considered as a concatenation to the original positional encoding, to improve the representation quality of the graph tokens.\\n\\n> Q3. What is the parameter count to be tuned?\\n\\n**A6** Please kindly refer to our response in **A1.2**.\\n\\n> Q4. Besides the vicuna-v-1.3-7B, can the authors provide experimental results for other LLM backbones? Since different backbones may have a big impact on the performance.\\n\\n**A7** Please kindly refer to our response in **A3**.\\n\\n> Q5. How is the proposed model performance for zero-shot or few-shot scenarios?\\n\\n**A8** Please kindly refer to our response in **A2**.\\n\\n> Q6. In Table 2, llama 13b has a worse performance than llama 7b on most of the datasets. 
Also, Galactica-120B has a sharp performance drop on BACE. Any explanations for these results?\\n\\n**A9** We directly take the results of Llama-13B, Llama-7B and Galactica from the existing literature[1,3,4]. The performance drop from small LLMs to large LLMs may be caused by the reduced overfitting to small datasets (e.g., BACE, BBBP) and improved understanding of challenging tasks (e.g., HIV).\"}", "{\"summary\": \"This paper proposes a framework named HIGHT to align molecular data with LLMs. It identifies a shortcoming of LLM on learning functional groups, and proposes to extend the graph tokenization to motif level. Specifically, its input to the LLM includes node/atom embeddings as well as motif embeddings. The model is fine-tuned with motif prediction tasks on a dataset constructed using RDKit. The model shows good performance on molecule properties prediction compared to language models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The overall presentation is clear and the paper is easy to follow. The work proposes a complete pipeline to build a model with stronger motif/functional group querying ability. Using motif tokens is a straight-forward solution to enhance such ability. Various experiments are conducted to validate the model.\", \"weaknesses\": [\"From the novelty and contribution perspective, taking motif representations/tokens is not new. By simply searching on Google, I found several papers that extract motifs for graph modeling [1, 2] (as the author also mentioned in the paper). This work is a simple extension of these techniques to align the motifs to the LLM.\", \"If I understand correctly, the motif tokenization algorithm, BRICS, will break the molecule in a very chemistry-aligned way. For example, a \\\"OH\\\" functional group will be tokenized into a motif. 
The downstream task of identifying the functional group will be very easy (simply aligning a single motif token with the text description of the function group, and the task is like asking \\\"does a -OH motif have -OH functional group\\\"). The author should justify how this is a helpful task besides simply telling the LLM that \\\"there is such a functional group.\\\" For example, the author should show that the method has better downstream performance than simply telling the LLM the existence of functional groups.\", \"The distinction between specialist model and generalist model is arbitrary to me. Methods like MolFM and Text+Chem T5-augm-base have the same functionality as the proposal, yet they achieved better performance than HIGHT. I think the HIGHT is more specialized, as it requires explicit and specialized atom and motif tokenization. Can you be more specific about the distinction, and what's the advantage of a generalist model?\", \"Even without the motif tokens, many models achieved stronger performance. Can you explain why a better motif prediction ability does not lead to better downstream performance? Link back to weakness 1, does this also mean that the proposed task is too easy for the motif tokenization, preventing the model from learning meaningful/molecule-property-related from the pretraining process?\", \"[1] Zhang, Zaixi, et al. \\\"Motif-based graph self-supervised learning for molecular property prediction.\\\" Advances in Neural Information Processing Systems 34 (2021): 15870-15882.\", \"[2] Chen, Xuexin, et al. 
\\\"Motif graph neural network.\\\" IEEE Transactions on Neural Networks and Learning Systems (2023).\"], \"questions\": [\"In tables 3, 4, 5, are all baselines also fine-tuned with the same dataset as HIGHT?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer P2nZ (part 4)\", \"comment\": \"**References**\\n\\n[1] GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning, NeurIPS\\u201923.\\n\\n[2] Visual Instruction Tuning, NeurIPS\\u201923.\\n\\n[3] InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery, arXiv\\u201923.\\n\\n[4] MolecularGPT: Open Large Language Model (LLM) for Few-Shot Molecular Property Prediction, arXiv\\u201924.\\n\\n[5] GALACTICA: A Large Language Model for Science, 2022\"}" ] }
4VfPLTqdrq
Understanding Scale Shift in Domain Generalization for Crowd Localization
[ "Juncheng Wang", "Lei Shang", "Ziqi Liu", "wanglu", "Zhe Hu", "Xixu HU", "Shujun Wang" ]
Crowd localization plays a crucial role in visual scene understanding towards predicting each pedestrian location in a crowd, thus being applicable to various downstream tasks. However, existing approaches suffer from significant performance degradation due to differences in head scale distributions (scale shift) between training and testing data, a challenge known as domain generalization (DG). This paper aims to comprehend the nature of scale shift within the context of domain generalization for crowd localization models. To this end, we address three key questions: (i) how to quantify the scale shift influence on the DG task, (ii) why this influence occurs, and (iii) how to mitigate the influence. Specifically, we first establish a benchmark, ScaleBench, and reproduce 20 advanced DG algorithms to quantify the influence. Through extensive experiments, we demonstrate the limitations of existing algorithms and highlight the under-explored nature of this issue. To further understand the reason behind it, we provide a rigorous theoretical analysis of scale shift. Building on this analysis, we further propose a simple yet effective algorithm called Semantic Hook to mitigate the influence of scale shift on DG, which also serves as a case study revealing three significant insights for future research. Our results emphasize the importance of this novel and applicable research direction, which we term $\textit{Scale Shift Domain Generalization}$.
[ "Crowd Localization", "Domain Generalization", "Scale Shift" ]
Reject
https://openreview.net/pdf?id=4VfPLTqdrq
https://openreview.net/forum?id=4VfPLTqdrq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zXiq1wlvwF", "u0tJQPAcq2", "mp1wwPjBQO", "XRObuKElja", "K3wi56p4NQ", "DMWenLK4AA", "9Du8PVLI7H", "07QrK7H1iC" ], "note_type": [ "official_review", "official_comment", "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1730625956241, 1733158174035, 1730567376730, 1734577145889, 1730685645635, 1730700317926, 1730719602503, 1737523546075 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_gsTA" ], [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_pL1Y" ], [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_Hohg" ], [ "ICLR.cc/2025/Conference/Submission2971/Area_Chair_afKu" ], [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_p2oH" ], [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_RxyN" ], [ "ICLR.cc/2025/Conference/Submission2971/Reviewer_pL1Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper focuses on the impact of scale variations on crowd localization models' ability to generalize across datasets. To tackle this, the authors introduce ScaleBench, a new benchmark dataset specifically curated to study scale shift, and evaluate 20 existing domain generalization algorithms, showing that many struggle with this type of shift. They also propose an approach, Semantic Hook, aimed at mitigating scale shift by strengthening the association between semantic features and predictions, rather than relying on scale information. 
While the improvements are modest, the paper offers valuable insights into scale-based generalization challenges.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The development of ScaleBench is a major contribution, offering a curated dataset specifically designed to study scale shift effects on domain generalization.\\n2) The paper introduces the Semantic Hook as a novel approach to reduce the impact of scale shift in domain generalization tasks.\\n3) The paper is well-structured and logically organized. Offering theoretical insights and comprehensive empirical evaluations.\", \"weaknesses\": \"1) The paper frames the issue of scale shift as a new challenge within domain generalization for crowd localization, but this framing seems overstated. Scale shift, where different head sizes (scales) impact model performance across datasets, is not fundamentally new. Previous works have already explored the impact of scale variation on domain adaptation in crowd analysis (albeit under different terminologies), suggesting that this issue is more a subset of a well-studied generalization problem rather than a novel concept. Claiming it as the \\\"first study\\\" on \\\"scale shift domain generalization\\\" could be seen as an attempt to rebrand existing challenges without sufficient justification.\\n\\n2) The proposed \\\"Semantic Hook\\\" technique to mitigate scale shift claims to enhance the association between semantic features and task predictions, but its practical effectiveness remains questionable. This method involves adding Gaussian noise to \\\"hook\\\" relevant features, yet the theoretical rationale behind this approach is underdeveloped. How \\\"Semantic Hook\\\" contributes to decoupling scale-related biases from semantic content is unclear. 
Additionally, the improvement in F1 scores presented in Table 2 is marginal, suggesting that the Semantic Hook might not be a robust solution.\\n\\n3) While the paper provides a comparison of 20 domain generalization algorithms, there is little discussion about the practical differences in their robustness against scale shifts. The Semantic Hook\\u2019s performance is only marginally better than ERM, raising doubts about its practical value. Furthermore, the experiments rely heavily on F1 scores across ScaleBench domains but do not include additional evaluation metrics (e.g., precision, recall) that could provide a fuller picture of model performance under scale shift.\\n\\n4) ScaleBench, with its scale-based domain partitions, may not accurately reflect real-world applications where scale distributions are more complex and continuous rather than discretely defined. The Leave-One-Out approach used for evaluation also artificially simplifies the generalization challenge. Real-world scenarios often involve more nuanced and diverse shifts between training and deployment environments, suggesting that the paper\\u2019s evaluation may lack external validity.\", \"questions\": \"1) How does \\\"scale shift\\\" in crowd localization fundamentally differ from other types of domain shifts?\\n2) How does Semantic Hook compare with simpler baseline methods, such as multi-scale training or augmentations?\\n3) Can the spurious association between scale and the output be quantified?\\n4) How would ScaleBench and Semantic Hook perform in real-world crowd localization scenarios with continuous scale distributions?\\n5) What are the limitations of ScaleBench in generalizing to diverse crowd analysis tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Since the authors did not engage in the rebuttal process, I am inclined to maintain a somewhat negative rating.\"}", 
"{\"summary\": \"This paper analyzes domain generalization under scale shift in crowd localization, where object scales vary across domains. To address the lack of benchmarks for studying scale-related shifts, the authors introduce Scale Bench. This benchmark divides data into domains based on scale and evaluates models on their ability to generalize to unseen scales. They propose Semantic Hook, a training method that uses noise perturbations to reduce scale reliance and strengthen semantic associations in model predictions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1- The paper is well-written and engaging, and the ideas flow smoothly.\\n\\n2-The field of crowd counting/localization would benefit from an analytical work focused on the issue of scale variance, as scale shifts present a significant challenge for model generalization across diverse domains. This paper addresses this gap, and provides both a theoretical framework and a practical benchmark.\", \"weaknesses\": \"1- There is limited mention (with brief explanation for each) of papers explicitly in crowd counting/localization fields that tackle the issue of scale variance.\\n\\n2- While the paper provides a comprehensive benchmark for scale-related domain generalization, it lacks coverage of crowd counting/localization methods specifically designed for domain generalization. How many of the methods in Table 2 discuss scale variance for crowd counting specifically?\\n\\n3- The paper does not clarify what has been done to prevent overfitting, particularly given the possible complexity of the models relative to the training data provided. \\n\\n4- Although Tables 6-18 and a brief discussion for each are included in the appendix, the paper lacks an in-depth analysis explaining why certain methods outperform others in specific cases. 
A discussion of these results would add valuable context to understand the strengths and limitations of each approach under different scale conditions. What could be the issue that makes each of these methods fail to generalize to a new domain?\", \"questions\": \"1- What is the difference between semantic hook and other methods that also perturb the image for crowd counting/localization?\\n\\n2- What other methods specifically in crowd counting and/or localization exist that have addressed the scale variance? Have any of these methods been implemented in this paper?\\n\\n3- Why does each of the previous methods fail to address this issue? What is the authors' insight in this matter?\\n\\n4- How were the hyperparameters for each model set? Did the authors use grid search?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper was reviewed by five experts in the field. The final ratings are 3,5,5,5,5. Reviewers generally agree that the scale shift is an important and interesting problem in crowd localization. Reviewers also raised many concerns, including insufficient explanation of the proposed method, limited experiments, etc. There is no rebuttal from the authors, so there is no ground to overrule reviewers' recommendations.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not provide a rebuttal\"}", "{\"summary\": \"This paper aims to study the effect of scale shift in crowd localization. To this end, a benchmark, dubbed ScaleBench, is first established to quantify the influence of scale shift. Next, SemanticHook is proposed to tackle scale shift. The key idea is to enhance the association between semantic features and targets by perturbing the input image with Gaussian noise. Empirical analyses on ScaleBench justify the effect of scale shift.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
A controlled benchmark is established to study scale variance in crowd localization.\\n2. This paper proposes SemanticHook to handle scale shift.\\n3. Comprehensive analyses are presented to quantify the influence of scale shift.\", \"weaknesses\": \"1. The rationality of ScaleBench is questionable. First, while the perspective effect often occurs in crowd localization, there exist images that are captured from different angles (e.g., top view). In such scenarios, image distribution regularization may fail to partition the images correctly. Second, in the real world, scale shift is often coupled with other factors, such as occlusion, weather, and appearance. For example, when the object suffers from significant appearance variations, the counting model may fail to localize objects even if training and testing data yield the same scale distribution. Third, dividing images into patches will inevitably result in incomplete objects, which could affect the localization results. Therefore, evaluations on ScaleBench may not rigorously reflect the influence of scale shift.\\n2. The proposed SemanticHook does not exhibit superiority over existing methods. As shown in Table 1, the simplest baseline ERM already achieves good results. The proposed method is not necessarily better than ERM.\\n3. Following the previous comment, the rationale of SemanticHook is not entirely convincing. Eq. 6 suggests that p(s, c, \\u2026) can lead to a spurious association between the output y and scale c. This term is a joint distribution of semantic s and scale c. However, the authors merely try to enhance the semantic association between semantic s and output y. Experimental results demonstrate that such a technique does not address scale shift effectively. Additionally, perturbing image is not a new idea, which is widely used in adversarial attack.\\n4. It appears that the influence of image interpolation is not rigorously quantified in Table 4. 
First, the implementation of Random Augmentation shall be modified according to different domains, i.e., the range of random scaling should be customized based on domain Tiny, Small, and Normal. Second, it is necessary to train the model using different source domains to identify the effect of image interpolation. The results on domain Big are insufficient to conclude that the benefits of image interpolation are modest.\\n5. Regarding training details. In practice, random scaling is commonly used to alleviate scale variations. As the authors use this technique to train the model, the reported results may not correctly reveal the effect of scale shift, because random scaling already simulates different scales.\\n6. The paper lacks evaluations on previous methods featuring multi-scale architecture, e.g., STEER. Evaluations on these methods are helpful in revealing whether previous methods can handle scale variations.\", \"questions\": \"1. What does global feature mean in Table 5? Figure 2 shows that semantic features are extracted from the encoder. How to extract global feature?\\n2. Is the proposed method sensitive to the choice of gamma?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the impact of scale shift on domain generalization in crowd localization. Models experience performance degradation due to head scale distribution shifts in training and testing datasets. To address this, the authors provide a theoretical analysis of the scale shift under domain generalization and introduce a novel method to mitigate the effect of scale shift, called Semantic Hook. 
The paper proposes a new benchmark called ScaleBench and provides bounding box annotations for existing public crowd benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper addresses an under-explored issue of scale shift in domain generalization for crowd localization. In terms of contributions, the paper delivers manually annotated bounding boxes for crowd localization on existing public crowd benchmarks. The paper is well-structured and provides a good analysis of the problem, resulting in a novel solution method called Semantic Hook. Further, the authors take an analytical route for the scale shift under domain generalization connecting other attributes present in datasets.\", \"weaknesses\": \"The paper needs more detailed explanations regarding how Semantic Hook mitigates the scale shift in domain generalization. It also needs to clarify which variables or attributes are being generalized from the perturbation added during training. Additionally, the improvements from the proposed method on the ScaleBench benchmark are marginal compared to the baseline method. Furthermore, the mathematical formulations (Eq. 6) used for the theoretical analysis need to be corrected.\", \"questions\": \"1. Explain how Semantic Hook handles the scale shift on domain generalization. From the given formulation, the semantic difference will highlight the effect of the perturbation, and the decoder is now learning to map noise to task-specific outcomes. So, how does Semantic Hook reduce generalization risk?\\n2. The conditional probability derived in Eq. 6 is incorrect at the first integral. The conditional probability P(y|x) does not equal integrating P(y|z) over the domain of Z. Please provide the correct derivation.\\n3. The scale shift is more prevalent in crowd images under perspective projection; however, the scale is more uniform throughout the scene for aerial views. 
How does the proposed method handle different projections?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a significant contribution to domain generalization (DG) for crowd localization by addressing the challenge of scale shift, where differences in head size distributions between training and testing data impact model performance. The authors introduce ScaleBench which categorizes datasets based on scale distributions. They also propose Semantic Hook, an algorithm designed to mitigate scale shift by reinforcing the association between semantic features and task predictions. Through testing 20 state-of-the-art DG algorithms on ScaleBench and conducting theoretical analysis, the authors highlight the limitations of current approaches and introduce Scale Shift Domain Generalization as a novel research direction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The identification of scale shift as a specific domain shift challenge in crowd localization, and the introduction of Scale Shift Domain Generalization, bring attention to an under-explored issue with significant real-world implications. ScaleBench provides a standardized benchmark, adding practical value for the research community.\\n2. The authors provide a clear theoretical explanation linking scale shift to diversity and correlation shifts, elucidating why DG models struggle with this issue. This rigorous analysis adds depth to the understanding of scale shift and its implications for DG.\\n3. The paper is well-organized, with each section following logically from the last. The clear delineation between problem identification, analysis, and solution makes the contributions easy to follow.\", \"weaknesses\": \"1. The introduction lacks highlighting core contributions and findings.\\n2. 
Although the authors indicate that the paper does not primarily focus on introducing a new method, the experiments in the main text feel somewhat limited. \\n3. The appendix contains several formatting issues, particularly with tables. Inconsistencies include varying font sizes, tables floating in the middle of pages, and some tables exceeding the page width. These layout problems affect readability and detract from the paper's presentation quality.\", \"questions\": \"I have no further questions beyond those outlined in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
4VNfufHtoS
Test-time Correction with Human Feedback: An Online 3D Detection System via Visual Prompting
[ "Zetong Yang", "Hanxue Zhang", "Yanan SUN", "Li Chen", "Fei Xia", "Fatma Guney", "Hongyang Li" ]
This paper introduces the Test-time Correction (TTC) system, a novel online 3D detection system designed for online correction of test-time errors via human feedback, to guarantee the safety of deployed autonomous driving systems. Unlike well-studied offline 3D detectors frozen at inference, TTC explores the capability of instant online error rectification. By leveraging user feedback with interactive prompts at a frame, e.g., a simple click or draw of boxes, TTC could immediately update the corresponding detection results for future streaming inputs, even though the model is deployed with fixed parameters. This enables autonomous driving systems to adapt to new scenarios flexibly and decrease deployment risks reliably without additional expensive training. To achieve such a TTC system, we equip existing 3D detectors with the OA module, an online adapter with a prompt-driven design for online correction. At the core of the OA module are visual prompts, images of missed objects-of-interest for guiding the corresponding detection and subsequent tracking. Those visual prompts, belonging to missed objects through online inference, are maintained by the visual prompt buffer for continuous error correction in subsequent frames. By doing so, TTC consistently detects online missed objects and immediately lowers driving risks. It achieves reliable, versatile, and adaptive driving autonomy. Extensive experiments demonstrate significant gains in instant error rectification over pre-trained 3D detectors, even in challenging scenarios with limited labels, zero-shot detection, and adverse conditions. We hope this work will inspire the community to investigate online rectification systems for autonomous driving post-deployment. Code will be publicly shared.
[ "Autonomous Driving", "3D Object Detection", "Test-time Error Correction" ]
https://openreview.net/pdf?id=4VNfufHtoS
https://openreview.net/forum?id=4VNfufHtoS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nPAY72UxgF", "mSaNBFpQj8", "YKHplfYCs3", "VhN5FiQcyd", "UdV5FyDE5y", "SQv3pYtL4K", "LkpRdWUmxH" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731048970242, 1730612847377, 1730725344279, 1731600651795, 1730701294029, 1730705787251, 1730712949562 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission665/Reviewer_ECUi" ], [ "ICLR.cc/2025/Conference/Submission665/Reviewer_R52R" ], [ "ICLR.cc/2025/Conference/Submission665/Reviewer_cfWC" ], [ "ICLR.cc/2025/Conference/Submission665/Authors" ], [ "ICLR.cc/2025/Conference/Submission665/Reviewer_2P2Q" ], [ "ICLR.cc/2025/Conference/Submission665/Reviewer_d1Gg" ], [ "ICLR.cc/2025/Conference/Submission665/Reviewer_Mvta" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduce the Test-time Correction (TTC) system, an innovative online 3D detection framework designed for correcting test-time errors in real-time through human feedback. TTC demonstrates the capability for immediate error rectification. Extensive experiments show substantial improvements in real-time error correction over pre-trained 3D detectors, even in challenging scenarios involving limited labels, zero-shot detection, and adverse conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Model flexibility: Accepts both monocular and multi-view data and supports any combination of various prompts (object, box, point, and novel visual prompts).\\n2. Clarity of writing: The paper is well-written, logically structured, and easy to read.\\n3. Extensive experiments: The main text and supplementary materials provide ample experiments to validate the effectiveness of TTC.\", \"practical_feasibility\": \"The authors explain real-world application scenarios, achieving immediate error rectification through user-friendly prompts.\", \"weaknesses\": \"1. 
OA is one of the core modules of the model, serving as a bridge between prompts and offline-trained 3D detectors. However, the explanation of OA in the method section is somewhat abstract; adding simple illustrative diagrams could aid understanding.\\n2. In the Related Work section, the Online 3D Detection System subsection discusses online 3D detectors. Expanding on offline 3D detectors would help readers better understand the development of offline versus online 3D detection.\\n3. There are some minor typos in the text that need correction.\", \"questions\": \"It is recommended to include the full term \\\"Online Adapter (OA)\\\" the first time OA is mentioned in the abstract.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the Test-time Correction (TTC) system, an online 3D detection system designed for online correction of test-time errors via human feedback. The proposed TTC system includes two components: an Online Adapter (OA) that equips 3D detectors with visual prompting ability, and a visual prompt buffer that records missed objects. Experiments were conducted on the nuScenes dataset, focusing on the TTC system across various 3D detectors and in out-of-training-distribution scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is clearly structured and well written.\\n2. This paper focuses on an interesting issue, namely, an online 3D detection system designed for online correction of test-time errors via human feedback.\", \"weaknesses\": \"1. The rationale behind this task setup requires further discussion. Since the proposed task involves human feedback, this process is often uncontrollable, making it challenging to ensure real-time performance. This limitation affects the feasibility of applying the task in real-world autonomous driving scenarios.\\n2. 
The rationale behind the EDS evaluation metric requires further discussion. Classification accuracy is also crucial for autonomous driving, and focusing solely on localization performance while neglecting classification performance is not realistic.\\n3. The proposed method is only applicable to vision-based autonomous driving solutions, limiting its generalizability to LiDAR-based autonomous driving systems.\", \"questions\": \"1. Please refer to the Paper Weaknesses mentioned above.\\n2. The experimental section only evaluates multiple tasks on MonoDETR; however, multi-view 3D detection is currently the mainstream approach in autonomous driving solutions. It is recommended to include experiments on mainstream multi-view 3D detectors across various tasks.\\n3. The proposed method focuses solely on correcting missed detections, yet false positives are also a significant issue in autonomous driving. Is there scalability for correcting false positives?\\n4. The experimental section lacks comparisons with existing instruction-based 3D detection methods that typically utilize text, boxes, or clicks as prompts; it is recommended to include such comparisons.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a 3D object adaptation method that can incorporate human feedback online. The system can work with various visual prompts, including reference images, boxes in the image, and clicks in the image. The proposed method is validated on both in-domain and out-of-domain datasets, demonstrating its effectiveness in these scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The designed validation experiments are comprehensive.\\n2. The proposed method is training-free, which means it can be broadly applied.\", \"weaknesses\": \"1. The proposed method can only handle missed objects. 
It seems it cannot reduce false positives (FPs) at test time.\\n2. Although the authors explain the differences between the proposed TTC and single object tracking, the explanation is unconvincing. The visual and object prompts can be easily used in the SOT setting. Such a discussion and at least comparisons with bbox annotations are a must. This is the key concern in my evaluation.\", \"questions\": \"1. The paper does not clearly present how the proposed modules are trained, especially the two-layer MLP in the key online adapter module.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents the Test-time Correction (TTC) system, an online 3D detection framework designed to correct test-time errors through real-time human feedback. Unlike conventional offline static 3D detectors, TTC aims to learn real-time error rectification by incorporating user feedback (e.g., clicks or bounding boxes). This approach allows for immediate updates to detection results for subsequent streaming inputs, even when the model operates with fixed parameters. The TTC system is achieved by integrating an OA module, an online adapter with a prompt-driven architecture, into existing 3D detectors for real-time correction. The key is visual prompts, specifically images of missed objects, which guide both current detection and future tracking. These visual prompts, representing objects missed during inference, are stored in a buffer to support continuous error correction across frames. 
Extensive experiments reveal substantial improvements in immediate error rectification compared to pre-trained 3D detectors, even under limited labeling, zero-shot detection, and challenging environmental conditions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well written and easy to understand.\\n\\n2. Enhancing 3D detection is an important task for autonomous driving.\\n\\n3. The performance gain is impressive.\", \"weaknesses\": \"1. Motivation\\n\\nPretrained static 3D detection modules have clear limitations due to issues like domain gaps, and I agree with the goal of improving these. However, I find it difficult to fully empathize with the motivation behind TTC. If issues in the predictions of the 3D detection module indeed have a significant impact on safety, as stated, would there realistically be an opportunity to perform online correction in such scenarios?\\n\\nAdditionally, I am skeptical about the feasibility of interventions like visual prompting during driving. Operating devices such as navigation systems manually while driving is likely a legal violation in most countries, and in practice, the difficulty level for performing such tasks during driving seems exceedingly high.\\n\\n2. Comparison with TTA or TTT\\n\\nIn this field, there are various approaches for online improvement of pre-trained static models, such as test-time adaptation (TTA) and test-time training (TTT). Notably, most of these methods function without the need for human feedback. A thorough methodological and performance comparison with these approaches is essential. Additionally, while TTT may be somewhat challenging, in the case of TTA, it seems feasible to utilize human feedback as direct learning guidance. I would appreciate a more in-depth discussion on this aspect as well.\\n\\n3. Robustness\\n\\nIt is unrealistic to expect that user corrections will always be accurate. 
Depending on the situation, incorrect user interventions could potentially worsen the proposed TTC. It would be beneficial to model the noise that might exist in visual prompting and demonstrate that TTC can operate robustly even when this noise becomes more pronounced.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the Test-time Correction (TTC) system, a novel online 3D detection framework designed to correct test-time errors through human feedback. This approach aims to enhance the safety of deployed autonomous driving systems.\\n\\nThe key idea of the TTC system is to improve existing 3D detectors with the Online Adapter (OA) module, a prompt-driven design that facilitates real-time error correction. Central to the OA module are visual prompts\\u2014images of missed objects of interest that guide corresponding detection and subsequent tracking. These visual prompts are stored in a visual prompt buffer to enable continuous error correction in subsequent frames. This approach allows the TTC system to consistently detect missed objects in real-time, thereby effectively reducing driving risks.\\n\\nExperimental results show that the proposed method, through test-time rectification, enhances the performance of offline monocular detectors (Zhang et al., 2022a), multi-view detectors (Wang et al., 2023c), and BEV detectors (Yang et al., 2023a) without the need for additional training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The broader impact of this paper may lie in inspiring the research community to further investigate the online rectification approach in autonomous driving systems. 
This crucial technology has the potential to significantly enhance the safety and reliability of safety-critical applications.\", \"weaknesses\": \"The paper lacks self-containment. For example, in lines 217-231, where the authors describe various visual prompts, they mainly reference numerous other methods without offering sufficient detail. This heavy reliance on external sources renders the paper somewhat incremental, as it fails to clearly articulate the novel contributions and context of the visual prompts within the proposed framework. Furthermore, this lack of clarity results in the use of many notations, such as \\\"visual features\\\" and \\\"image features,\\\" without providing clear definitions.\\n\\n\\nRather than referring to it as \\\"visual prompts,\\\" the pipeline developed in this paper essentially provides a template containing location and size information in the buffer, enabling generic tracking during test time without any additional training. Therefore, the authors are encouraged to clarify whether this pipeline fundamentally differs from a single-object tracker. Additionally, it would be beneficial to include an experiment comparing state-of-the-art (SOTA) trackers for test-time correction as part of the evaluation\", \"questions\": \"\\\"To achieve such TTC system, we equip existing 3D detectors with OA module, an online adapter with prompt-driven design for online correction.\\\" However, the acronym \\\"OA\\\" is not defined in the abstract.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a Test-time Correction (TTC) method that leverages human feedback to correct errors in real-time during testing. 
The core component, the Online Adapter (OA) module, enables existing 3D detectors to use visual prompts for continuously detecting previously undetected 3D objects.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Clear and well-structured writing, and easy to understand.\\n2. Significant performance improvements across multiple 3D detectors and comprehensive ablation studies validate module effectiveness.\\n3. The OA module is simple but effective.\", \"weaknesses\": \"1. Re-entering the regions of undetected targets as visual prompts introduces human knowledge, which may lead to potential biases and affect the fairness of comparative experiments.\\n2. The TTC method needs to maintain a buffer of visual cues and solve the matching problem between cues and target objects, which increases the complexity.\\n3. The experimental section lacks a description of how the visual prompts used during testing are obtained.\", \"questions\": \"1. How are the visual prompts used during testing obtained? I'm not sure if it's just adding the corresponding areas of undetected targets to the visual cue buffer.\\n2. Could the TTC method be combined with LLM-based 3D detection approaches to enhance generalization for novel object categories and domain shifts?\\n3. How does the TTC method handle potential noise and latency in user feedback?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4VHiptx7xe
STRAP: Robot Sub-Trajectory Retrieval for Augmented Policy Learning
[ "Marius Memmel", "Jacob Berg", "Bingqing Chen", "Abhishek Gupta", "Jonathan Francis" ]
Robot learning is witnessing a significant increase in the size, diversity, and complexity of pre-collected datasets, mirroring trends in domains such as natural language processing and computer vision. Many robot learning methods treat such datasets as multi-task expert data and learn a multi-task, generalist policy by training broadly across them. Notably, while these generalist policies can improve the average performance across many tasks, the performance of generalist policies on any one task is often suboptimal due to negative transfer between partitions of the data, compared to task-specific specialist policies. In this work, we argue for the paradigm of training policies during deployment given the scenarios they encounter: rather than deploying pre-trained policies to unseen problems in a zero-shot manner, we non-parametrically retrieve and train models directly on relevant data at test time. Furthermore, we show that many robotics tasks share considerable amounts of low-level behaviors and that retrieval at the "sub"-trajectory granularity enables significantly improved data utilization, generalization, and robustness in adapting policies to novel problems. In contrast, existing full-trajectory retrieval methods tend to underutilize the data and miss out on shared cross-task content. This work proposes STRAP, a technique for leveraging pre-trained vision foundation models and dynamic time warping to retrieve sub-sequences of trajectories from large training corpora in a robust fashion. STRAP outperforms both prior retrieval algorithms and multi-task learning methods in simulated and real experiments, showing the ability to scale to much larger offline datasets in the real world as well as the ability to learn robust control policies with just a handful of real-world demonstrations.
[ "dynamic time warping", "few-shot imitation learning", "retrieval", "foundation models" ]
Accept (Poster)
https://openreview.net/pdf?id=4VHiptx7xe
https://openreview.net/forum?id=4VHiptx7xe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ztijBOBl54", "xtUqcPachy", "sgdXfkZOnh", "s7gc9w87Fh", "m1GxnRQh4B", "kN12heVBCB", "jRE1ApkfuM", "jIVJNlcOJW", "f2SSOszNjo", "esdpB8KT6p", "dSumYEJ17W", "YVZ9zvWOjf", "VOyXqeaPqp", "QmEFpNz4l4", "MVQP1yRSBX", "L6oNwkDdYm", "KioYifBuFr", "KPXaJbwvPx", "JLFwHyGtTs", "GM4g10qfvA", "9qujm10D4R", "0eLiZLrBXL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1732338066491, 1732337729862, 1732337923455, 1732338210634, 1732533489701, 1732725459682, 1732337821202, 1732436809425, 1732652698992, 1732629074317, 1729366962624, 1732638285494, 1737524148959, 1732739255371, 1732338138304, 1730702771379, 1732496791978, 1730703416662, 1732337583512, 1730647940265, 1734400595913, 1732337337091 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_Ndo3" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_HWGB" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_RToH" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_RToH" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_HWGB" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_JdfR" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11830/Reviewer_JdfR" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_Ndo3" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ], [ "ICLR.cc/2025/Conference/Submission11830/Reviewer_RToH" ], [ "ICLR.cc/2025/Conference/Submission11830/Area_Chair_fCkJ" ], [ "ICLR.cc/2025/Conference/Submission11830/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback on how we can improve our paper. We provide additional discussions on the computational complexity of our retrieval algorithm, an extensive ablation of hyperparameter $K$, and a detailed comparison to recent retrieval systems.\\n\\n**Computational cost to adapt to new scenes**\\n\\nCurrent generalist policies are unable to adapt zero-shot to the target task. A common approach is to fine-tune a pre-trained policy on a few target demonstrations (FT). While recent generalist policies suggest fine-tuning on 10-150 demonstrations [1] we only have access to 3-5 demonstrations in our few-shot setting. While the pre-training can provide faster adaptation, fine-tuning on a small number of demonstrations is unable to cover the randomization the policy is exposed to during evaluation (cf. added FT baselines in [Table 1,2 & 3]). By retrieving relevant data, we expose our expert to a much larger training distribution making it more robust to the deployment scenario.\\n\\nSTRAP and a standard fine-tuning approach differ in the retrieval process and longer policy training. Our discussion in [A.5] shows that the retrieval scales linearly with the number of (sub-)trajectories in D_prior and D_target. Greater speedups can be achieved by parallelizing the retrieval and utilizing GPUs to compute the DTW distance matrix. 
Overall, STRAP\\u2019s runtime is longer compared to fine-tuning (\\\\~5min) consisting of retrieval (\\\\~5min on a DROID-scale dataset) and training a policy (\\\\~30min) but provides significant robustness benefits shown by an average performance boost of $+6.4$% and $+25.7$% across all LIBERO-10 and real-world Kitchen tasks, respectively.\\n\\n[1] OpenVLA: An Open-Source Vision-Language-Action Model. Kim et al. 2024\\n\\n**Computational cost of the retrieval process**\\n\\nTo avoid excessive compute during test time, we precompute the embeddings for D_prior! Encoding a single image running DINOv2 on an NVIDIA L40S 46GB and batch size 32 takes $2.83ms\\\\pm 0.08$ (average across 25 trials). The wall clock time for encoding the entire DROID dataset (\\\\~18.9M timesteps, single-view) therefore sums up to only \\\\~26h. Every dataset only has to be encoded once when added to D_prior and can be reused for all future deployment scenarios. In contrast to previous methods (cf. BehaviorRetrieval, FlowRetrieval), using an off-the-shelf vision foundation model also eliminates the need to re-train the encoder and re-encode the entire dataset when D_prior grows.\\n\\nRetrieving data with S-DTW scales linearly with the size of D_prior, allowing for retrieval within \\\\~5min even from the largest available datasets like DROID. Finally, STRAP\\u2019s policy learning stage is independent of the size of D_prior and only depends on the amount of retrieved data $K$, making it more scalable than common pre-training + fine-tuning or multi-task approaches that have to be re-trained when new trajectories are added to D_prior. We thoroughly discuss the complexity of our retrieval algorithm and benchmark the runtime of S-DTW in [A.5]. 
Note that there are several possible ways to improve our implementation, e.g., leveraging GPU deployment and custom CUDA kernels to compute the distance matrix or parallelizing retrieval across trajectories.\"}", "{\"comment\": \"**Computational cost, memory, and scalability to real-world datasets**\\n\\nSTRAP\\u2019s retrieval stage is based on S-DTW which consists of two stages: computing the distance matrix $D$ and finding the shortest path via dynamic programming.\\nThese stages have to be run sequentially for each sub-trajectory in D_target and trajectory in D_prior but don\\u2019t depend on the other (sub-)trajectories. Therefore, STRAP has a runtime complexity of $\\\\mathcal{O}(N*M)$ with N the number of sub-trajectories in D_target and M the number of trajectories in D_prior.\\n\\nWe thoroughly benchmark the runtime of S-DTW in [A.5]. For an offline dataset the size of \\nDROID (76k), retrieval takes approximately $300sec$ using our unoptimized research code. Note that there are several possible ways to improve our implementation, e.g., leveraging GPU deployment and custom CUDA kernels to compute the distance matrix or parallelizing retrieval across trajectories.\\n\\nSince we encode D_prior using vision foundation models, the memory footprint of STRAP stays fairly low. Loading the embeddings to compute the cost matrix represents the largest memory consumption but is much lower than the memory consumed by loading image or video sequences.\\n\\nFor runtime visualization and information regarding data encoding and policy learning complexity please refer to [A.5] in the appendix.\\n\\n**Choice of hyperparameter $K$**\\n\\nThank you for this suggestion! We extensively ablate STRAP for $K \\\\in (100, 200, 500, 1000, 2000, 4000)$ on all LIBERO-10 tasks. We find tuning $K$ to improve our previously reported success rates ($K=100$) on 8/10 tasks by an average of $7.5$%. 
The results suggest that the optimal value for $K$ is task-dependent with some tasks benefiting from retrieving less (4/10) and some from retrieving more (4/10) data. We hypothesize that the optimal $K$ depends on whether tasks leverage (positive transfer) or suffer (negative transfer) from multi-task training. We update our results in [Tables 1 & 3] with the improved results. You can find additional details and visualizations in [Table 8] and [Figure 29] in the Appendix.\\n\\nThe choice of $K$ has no impact on the computational complexity of STRAP as we always compute all matches between D_target and D_prior before selecting the top $K$. This also means that the hyperparameter search on $K$ can be reduced to policy learning and evaluation by storing the matches and retrieving $K$ segments for each iteration.\\n\\n**Cross-embodiment generalization**\\n\\nWe leave cross-embodiment retrieval to future work but emphasize that the embedding choice allows for some steerability of the retrieved sub-trajectories. For instance, averaging the embeddings of all three cameras leads to retrieving very similar scenes [see \\u201cDoes retrieval scale to DROID?\\u201d on our website] while using embeddings of only the in-hand camera focuses much more on the manipulated object and can be embodiment-agnostic. While the retrieval might scale to multiple embodiments, training cross-embodied policies is much more challenging, and leveraging cross-embodied data is an open research problem.\\n\\nWe hope these additional modifications have addressed your previous questions. Please don\\u2019t hesitate to let us know if you have any additional comments or questions.\"}", "{\"comment\": \"**Few-shot demos and model training at test time**\\n\\nCurrent generalist policies are unable to adapt zero-shot to the target task. A common approach is to fine-tune a pre-trained policy on a few target demonstrations (FT). 
While recent generalist policies suggest fine-tuning on 10-150 demonstrations [1], we only have access to 3-5 demonstrations in our few-shot setting. While the pre-training can provide faster adaptation, fine-tuning on 3 demonstrations is unable to cover the $20\\times20cm$ grid of possible object poses the policy is exposed to during evaluation (cf. added FT baselines in [Table 1,2 & 3]). By retrieving relevant data, we expose our expert to a much larger training distribution, making it more robust to the deployment scenario.\\n\\nSTRAP and a standard fine-tuning approach differ in the retrieval process and longer policy training. Our discussion in [A.5] shows that the retrieval scales linearly with the number of (sub-)trajectories in D_prior and D_target. Greater speedups can be achieved by parallelizing the retrieval and utilizing GPUs to compute the DTW distance matrix. Overall, STRAP\\u2019s runtime is longer compared to fine-tuning (\\\\~5min) consisting of retrieval (\\\\~5min on a DROID-scale dataset) and training a policy from scratch (\\\\~30min) but provides significant robustness benefits shown by an average performance boost of $+6.4$% and $+25.7$% across all LIBERO-10 and real-world Kitchen tasks, respectively.\\n\\n[1] OpenVLA: An Open-Source Vision-Language-Action Model. Kim et al. 2024\\n\\n\\n**Cross-environment and -embodiment generalization**\\n\\nWe provide examples of sub-trajectories retrieved from the DROID dataset on our website [see \\u201cDoes retrieval scale to DROID?\\u201d]. STRAP retrieves trajectories collected in environments with similar appearance, e.g., camera pose, table orientation, and texture, and similar tasks, e.g., picking up cylindrical objects. The choice of embedding allows for further steerability of the retrieved sub-trajectories. 
For instance, averaging the embeddings of all three cameras leads to retrieval of very similar scenes, while using embeddings of only the in-hand camera focuses much more on the manipulated object and can be embodiment-agnostic. While the retrieval could scale to multiple embodiments, training cross-embodied policies is much more challenging, and leveraging cross-embodied data is an open research problem that we leave to future work.\\n\\nWe hope these additional modifications have addressed your previous questions. Please don\\u2019t hesitate to let us know if you have any additional comments or questions.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback and for acknowledging our \\u201ccompelling intuition\\u201d and \\u201cthorough experiments\\u201d.\\n\\n**Project incompleteness**\\n\\nWe\\u2019ve updated the link to the website: https://strapaper.github.io/strap.github.io/ Furthermore, we\\u2019ve added more real-world experiments and improved baselines to the main paper and extended the appendix with experimental details and discussion on the computational complexity and choice of hyperparameters. \\n\\n**Visual readability and formatting**\\n\\nWe acknowledge that some of the sections lacked proper formatting. We\\u2019ve improved the formatting by increasing the margins for figures, equations, and tables. Furthermore, we\\u2019ve updated [Figure 2] to be more interpretable and added pointers from the visual stages to the different sections in the paper. Could you please comment on whether these changes are sufficient and clarify what other changes to the writing quality and visual readability you would like to see?\\n\\n**Generalization to other imitation learning methods**\\n\\nWe\\u2019ve run additional experiments using alternative architectures and imitation learning methods. 
Unfortunately, we found simpler architectures (MLP+GMM, LSTM+GMM) to struggle with achieving non-zero success rates on the LIBERO benchmark or to be limited in their capability to be conditioned on language (Diffusion Policies). While vision-language-action models exhibit much better language-conditioning, they require significantly more fine-tuning data than available in our setting [3]. Therefore, we chose transformer-based policies since they exhibit much better language-conditioning and multi-task capabilities as shown in [1,2].\\n\\n[1] Libero: Benchmarking knowledge transfer for lifelong robot learning. Liu et al., 2024\\n\\n[2] BAKU: An Efficient Transformer for Multi-Task Policy Learning. Haldar et al. 2024\\n\\n[3] OpenVLA: An Open-Source Vision-Language-Action Model. Kim et al. 2024\\n\\nWe hope these additional modifications have addressed your previous questions. Please don\\u2019t hesitate to let us know if you have any additional comments or questions.\"}", "{\"comment\": \"Thank you for your efforts in the rebuttal. The new experiments and baselines address my initial concerns. Thus, I raise my score to 6.\", \"i_got_one_follow_up_question\": \"How does FT with STRAP-retrieved trajectories do on LIBERO-10 and in the real-world setting?\"}", "{\"comment\": \"We hope our responses have adequately addressed your concerns! If so, we kindly ask you to increase your score recommendation.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback on how we can improve our paper. We provide additional results on improved baselines, discussions on the runtime differences between STRAP and a fine-tuning approach, and generalization to new environments and embodiment. \\n\\n**Additional real-world tasks**\\n\\nWe\\u2019ve added three real-world tasks (\\u201cpick and place the [carrot, pepper, chili]\\u201d) in three realistic kitchen environments ([table, sink, stove]) and collected 50 multi-task demonstrations in every scene. 
Object poses are randomized in a $20\\\\times20cm$ grid during data collection and evaluation. D_target is specific to each environment and contains 3 demonstrations of the downstream task. \\n\\nWe find STRAP to exhibit surprising generalization behavior from the 3 poses seen in D_target to the poses in the $20\\\\times20cm$ grid exposed during evaluation (Kitchen). The policy further shows recovery behavior, completing the task even when the initial grasp fails and alters the object pose [see the robot attempting to pick up the red pepper in the first video on our website]. As shown in [Table 2], STRAP even maintains its high performance, retrieving the relevant data for much larger offline datasets (Kitchen+DROID).\\n\\nMeanwhile, the large pose randomizations in D_prior and the evaluation are challenging for the baselines. Behavior cloning and fine-tuning (BC, FT) fit the target demonstrations, failing to adapt to unseen object poses. The multi-task policy (MT) replays trajectories from the offline dataset instead of solving the prompted task, most likely due to an imbalanced training dataset. Increasing the dataset size amplifies these challenges by making it significantly harder to pre-train or learn a multi-task policy, leading to performance drops of $-6.09$% (FT) and $-26.3$% (MT).\\n\\n**Stronger baselines**\\n\\nWe extend our evaluations by including two additional baselines for all LIBERO-10 tasks and the real-world tasks introduced above.\\n\\n1) Pre-training + fine-tuning (FT), which represents a policy pre-trained on D_prior and fine-tuned on the few available demonstrations in D_target.\\n2) Multi-task policy (MT) trained on D_prior $\\\\cup$ D_target, in contrast to only D_prior in our original submission.\\n\\nThe fine-tuning baseline is more competitive than standard behavior cloning but still falls short by $-6.4$% and $-25.7$% compared to STRAP on the LIBERO-10 and real-world tasks. 
The updated multi-task policy performs well on some tasks where we expect positive transfer, i.e., where environments and tasks in D_prior and D_target overlap, e.g., LIBERO-10 and LIBERO-90 both contain the Book-Caddy task. We\\u2019ve added the FT baseline and replaced MT trained only on D_prior with MT trained on D_prior $\\\\cup$ D_target in [Tables 1,2 & 3].\"}", "{\"comment\": \"The first time I gave it a relatively low score, it was largely because the website had no content during my review, so I thought the article was in an unfinished stage.\\n\\nAlthough STRAP has a certain degree of improvement in multi-task generalization ability compared with the baselines, it usually requires few-shot learning. I don't think there is a significant advantage over the work of the same period, but I recognize the STRAP work's contribution to the robotics community. I will improve my score, but only to 5 points.\"}", "{\"comment\": \"Thank you for addressing my questions!\\nI've updated the ratings.\"}", "{\"comment\": \"Thank you for addressing my comments.\\nEven though the performance improvements seem promising, the computation cost does not seem practical.\\nIf the retrieval takes ~5 min on the DROID dataset, does it mean that we need 5 min of inference for each action prediction?\\nIf so, I would not agree that this algorithm is applicable to real-world problems.\"}", "{\"summary\": \"## Paper Review Summary\\n\\nThis work advocates for training policies dynamically during deployment, utilizing the encountered scenarios to improve model performance. Instead of relying on pre-trained policies to tackle new problems in a zero-shot fashion, the authors propose a non-parametric approach that retrieves relevant data and trains models directly at test time. The paper introduces STRAP, a method built on pre-trained Vision-Language Models (VLM) and dynamic time warping, which combines sub-trajectories into a policy. 
The approach involves some training on a set similar in language to the test set, and has been demonstrated in both real-world and simulated environments.\\n\\n### Strengths:\\n1. **Innovative Approach**: The authors present a compelling intuition, demonstrating robustness in solving multitask generalization challenges.\\n2. **Efficient Data Usage**: The method shows improvement in the way data is leveraged for robotics tasks, particularly in sub-trajectory retrieval.\\n3. **Thorough Experiments**: The experiments are detailed and show promising results for sub-trajectory retrieval and policy creation.\\n\\n### Weaknesses:\\n1. **Project Incompleteness**: There is no accessible website or supplementary information, suggesting the project might still be unfinished.\\n2. **Visual Readability**: The images in the paper are difficult to interpret, potentially detracting from the clarity of the results.\\n3. **Writing Quality**: The paper's writing needs improvement, especially in terms of clarity and readability.\\n4. **Generalization**: Some imitation learning methods, such as [Sparse Diffusion Policy](https://forrest-110.github.io/sparse_diffusion_policy/), [HPT](https://liruiw.github.io/hpt/), [RDT-Robotics](https://rdt-robotics.github.io/rdt-robotics), and [Humanoid Manipulation](https://humanoid-manipulation.github.io/), appear to show more generalization in similar settings.\\n5. **Formatting**: The paper's formatting is problematic, with certain sections being hard to read, affecting the overall readability of the work.\\n\\nIn conclusion, while the proposed method shows strong intuition and detailed experimentation, there are concerns about project completeness, readability, and potential improvements in both writing and generalization when compared to existing work. 
\\n\\nI would like to change the rate after discussion, but at least you should finish the site you provide.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Approach**: The authors present a compelling intuition, demonstrating robustness in solving multitask generalization challenges.\\n2. **Efficient Data Usage**: The method shows improvement in the way data is leveraged for robotics tasks, particularly in sub-trajectory retrieval.\\n3. **Thorough Experiments**: The experiments are detailed and show promising results for sub-trajectory retrieval and policy creation.\", \"weaknesses\": \"1. **Project Incompleteness**: There is no accessible website or supplementary information, suggesting the project might still be unfinished.\\n2. **Visual Readability**: The images in the paper are difficult to interpret, potentially detracting from the clarity of the results.\\n3. **Writing Quality**: The paper's writing needs improvement, especially in terms of clarity and readability.\\n4. **Generalization**: Some imitation learning methods, such as [Sparse Diffusion Policy](https://forrest-110.github.io/sparse_diffusion_policy/), [HPT](https://liruiw.github.io/hpt/), [RDT-Robotics](https://rdt-robotics.github.io/rdt-robotics), and [Humanoid Manipulation](https://humanoid-manipulation.github.io/), appear to show more generalization in similar settings.\\n5. **Formatting**: The paper's formatting is problematic, with certain sections being hard to read, affecting the overall readability of the work.\", \"questions\": \"1. Would you please update the sites in the paper?\\n2. Could you please add some experiments with other imitation learning methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Ah, there must be some misunderstanding! 
As a reminder, STRAP only runs retrieval once to augment the initial training dataset. We then train a Transformer-based policy on the augmented dataset [Section 4.5]. Afterward, the policy can run inference at 15Hz on the real robot (i.e., action prediction once every \\\\~0.0667 seconds) [updated A.2]. The top of our website [[link](https://strapaper.github.io/strap.github.io/)] shows the real-time policy rollouts (no speedup). To summarize, STRAP does not run retrieval during action prediction but only a single time before training a policy.\\n\\nPlease let us know if you have further questions!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for the additional experiments and insights\", \"comment\": \"Thank you for the additional experiments and insights. From the additional baselines, it seems that subtrajectory retrieval outperforms multitask behavioral cloning and fine-tuning, given access to the same pool of data. Additional experiments also show that STRAP can retrieve semantically relevant subtrajectories from large datasets, and improves robustness in the few-shot setting. My concerns have been addressed and I have updated my rating.\"}", "{\"comment\": \"**Choice of hyperparameter $K$**\\n\\nWe extensively ablate STRAP for $K \\\\in (100, 200, 500, 1000, 2000, 4000)$ on all LIBERO-10 tasks. We find tuning $K$ to improve our reported success rates ($K=100$) on 8/10 tasks by an average of $7.5$%. The results suggest that the optimal value for $K$ is task-dependent with some tasks benefiting from retrieving less (4/10) and some from retrieving more (4/10) data. We hypothesize that the optimal $K$ depends on whether tasks benefit from broader less relevant or more refined data.\\nThank you for this suggestion! We update our results in [Tables 1 & 3] with the improved results. 
\\nYou can find additional details and visualizations in [Table 8] and [Figure 29] in the Appendix.\\n\\nThe choice of $K$ has no impact on the computational complexity of STRAP as we always compute all matches between D_target and D_prior before selecting the top K. This also means that the hyperparameter search on $K$ can be reduced to policy learning and evaluation by storing the matches and training on $K$ segments.\\n\\n\\n**Baseline ablations**\\n\\na) We compare non-parametric (frozen DINOv2, denoted as D-S) and parametric (training a VAE as in Behavior Retrieval, denoted as BR) embeddings in a state-based retrieval setting. [Table 1] shows BR outperforming DINOv2 embeddings by $+5$% on average across all LIBERO-10 tasks. Inspecting the individual success rates in [Tables 1 & 3], it becomes clear that the optimal embedding is highly task-dependent. However, we expect future representation learning techniques to close this gap. DINOv2 only encodes a single observation and, therefore, does not have a notion of dynamics and task semantics compared to BR.\\n\\nb) & c) We compare state-based, sub-trajectory, and full trajectory retrieval in [Table 1]. While state-based retrieval is based on cosine similarity, the variable length of trajectories requires S-DTW for sub-trajectory and full trajectory retrieval. We find retrieving sub-trajectories outperforms state-based retrieval by $+17.2$% and full trajectory retrieval by $+4.2$% on average across all LIBERO-10 tasks.\\n\\nTo summarize, the largest gains come from retrieving (sub-)trajectories enabled by S-DTW. While the choice of embedding is largely task-dependent, STRAP allows for using off-the-shelf models by encoding the dynamics and semantics of the trajectories in the sub-trajectory retrieval process instead of the embedding model.\\n\\n\\nWe hope these additional modifications have addressed your previous questions. 
Please don\\u2019t hesitate to let us know if you have any additional comments or questions.\"}", "{\"summary\": \"This paper focuses on the setting of generalizing a policy to an unseen task with few-shot demonstrations. Instead of deploying zero-shot, the paper proposes STRAP, training a model on task-relevant data augmented by retrieval. STRAP first retrieves sub-trajectories from a large pretraining dataset that are similar to the new task demonstrations, then combines them with the few-shot demos to train a policy. Results on sim and real environments show that STRAP outperforms other retrieval methods and pure behavioral cloning. Ablations show that STRAP is compatible with various vision encoders and justify each of its component.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Sub-trajectory retrieval for the few-shot demo behavioral cloning setting is a well-motivated and novel idea.\", \"The method is clear and straightforward to implement.\", \"Results show that matching with DTW on vision foundation model features are robust to variations and capture task semantics.\", \"Real and simulated environments show that STRAP outperforms other retrieval methods and pure behavioral cloning.\"], \"weaknesses\": [\"STRAP requires few-shot demos and model training at test time for a new task.\", \"It would be good to see more sim and real environments for evaluations.\", \"It would be more convincing to see a behavioral cloning baseline that uses all available data.\"], \"questions\": [\"Baseline: how does STRAP compare to the pretrain-then-finetune setup? 
(Pretrain on the prior dataset, then fine-tune on the few-shot target demonstrations?)\", \"Baseline: how does STRAP compare to a multitask policy trained on all available data?\", \"Generalization: to what extent does STRAP generalize across environments or embodiments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for updating your score!\", \"comment\": \"**Requiring few-shot learning**\\n\\nDeploying a multi-task baseline without any additional demonstrations achieves zero success on unseen tasks ($0.0$\\\\% on 9/10 LIBERO-10 tasks). Collecting a small number of demonstrations is therefore necessary to achieve any success at all. STRAP goes one step further and better uses the demonstrations and available offline datasets to go beyond mere success and achieve robustness, as demonstrated in our sim and real experiments.\\n\\n**Advantage over prior work**\\n\\nTo touch on the advantages of STRAP over *work of the same period*, we point out that STRAP outperforms previous retrieval methods BehaviorRetrieval by $+12.2$\\\\% and FlowRetrieval by $+12.5$\\\\% on average across all LIBERO-10 tasks. With a tuned hyperparameter $K$, this gap widens to $+24.7$\\\\% and $+25.0$\\\\%, respectively.\\nIn contrast to these baselines, STRAP does not require training an embedding model and instead uses an off-the-shelf frozen vision foundation model, making it much more scalable to larger offline datasets.\\n\\nIf this doesn\\u2019t fully address your question, we would greatly appreciate it if you could provide further clarification.\"}", "{\"summary\": \"The paper introduces STRAP, a novel method for trajectory retrieval to find similar sub-trajectory segments in large-scale datasets for efficient policy learning in few-shot settings. 
The method's key contribution lies in combining pretrained visual encoders with Dynamic Time Warping to encode sub-trajectories of variable lengths for improved retrieval. The proposed method is tested against several retrieval and BC baselines on LIBERO-10 simulation and real-world pick-and-place tasks, achieving good performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method is well motivated and achieves strong results across multiple experiments, both in simulation and real-world settings\", \"The use of Dynamic Time Warping for sub-trajectory matching is novel and well-suited for the problem domain\", \"Comprehensive evaluation against recent retrieval baselines demonstrates the method's effectiveness\", \"Thorough ablation studies on different pretrained encoders provide valuable insights into architecture choices\", \"The paper is well written and includes several illustrative figures that enhance the text\"], \"weaknesses\": [\"The baseline comparison against the multi-task policy appears weak, as it only uses pretrained weights without fine-tuning. This seems like an artificially weak baseline since fine-tuning is standard practice for all MT-policies.\", \"The paper's argument that retrieval is more efficient than expensive pretraining needs stronger empirical support, especially given that the robotics community regularly fine-tunes general policies for downstream tasks\", \"The computational cost of STRAP's retrieval process on large-scale datasets like Droid is not adequately addressed, raising questions about real-world scalability. Some more clarity is necessary here\", \"The choice of K for constructing D-retrieval lacks sufficient explanation and ablations. The paper should explore how different K values affect retrieval quality, computational overhead, and policy performance, as this parameter likely presents a trade-off between performance and efficiency. 
A discussion about the retrieved data quantity would provide valuable insights and strengthen the paper.\", \"Small number of tested tasks in the real-world setting and missing baselines of the MT policy and fine-tuned MT policy\"], \"questions\": [\"Could STRAP be combined with fine-tuning of MT policies on the retrieved dataset to potentially achieve even better performance than domain-specific fine-tuning alone?\", \"How does STRAP's performance compare against standard fine-tuning approaches when controlling for the total amount of data used?\", \"What are the memory and computational requirements for deploying STRAP on very large trajectory datasets like Droid?\", \"Can you add the average performance for LIBERO-10 results to the main table?\", \"Can you provide a few more real-world tasks with the required baselines?\", \"Real-world retrieval is conducted with the same robot embodiment and gripper. How does STRAP perform when retrieving similar data from other robot datasets like BridgeV2 that do not share the same robot and scenes?\", \"STRAP presents a novel and well-designed approach to few-shot learning that tackles several drawbacks of prior methods through sub-trajectory retrieval with dynamic time warping. However, more comprehensive comparisons against fine-tuned baselines and clearer analysis of computational requirements would strengthen the paper's contributions. Thus, I recommend weak reject pending addressing the following concerns: (1) comparisons against fine-tuned baselines, (2) clearer analysis of computational requirements, and (3) better justification of parameter choices.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
We provide additional results on three real-world tasks, improved baselines, an extensive ablation of hyperparameter $K$, and a discussion on our retrieval algorithm's computational and memory complexity.\\n\\n**Additional real-world tasks**\\n\\nWe\\u2019ve added three real-world tasks (\\u201cpick and place the [carrot, pepper, chili]\\u201d) in three realistic kitchen environments ([table, sink, stove]) and collected 50 multi-task demonstrations in every scene. Object poses are randomized in a $20\\\\times20cm$ grid during data collection and evaluation. D_target is specific to each environment and contains 3 demonstrations of the downstream task. \\n\\nWe find STRAP to exhibit surprising generalization behavior from the 3 poses seen in D_target to the poses in the $20\\\\times20cm$ grid exposed during evaluation (Kitchen). The policy further shows recovery behavior, completing the task even when the initial grasp fails and alters the object pose [cf. robot attempting to pick up the red pepper in the first video on our website]. As shown in [Table 2], STRAP even maintains its high performance, retrieving the relevant data for much larger offline datasets (Kitchen+DROID).\\n\\nMeanwhile, the large pose randomizations in D_prior and the evaluation are challenging for the baselines. Behavior cloning and fine-tuning (BC, FT) fit the target demonstrations, failing to adapt to unseen object poses. The multi-task policy (MT) replays trajectories from the offline dataset instead of solving the prompted task, most likely due to an imbalanced training dataset. 
Increasing the dataset size amplifies these challenges by making it significantly harder to pre-train or learn a multi-task policy, leading to performance drops of $-6.09$% (FT) and $-26.3$% (MT).\\n\\n\\n\\n**Stronger baselines**\\n\\nWe extend our evaluations by including two additional baselines for all LIBERO-10 tasks and the real-world tasks introduced above.\\n\\n1) Pre-training + fine-tuning (FT), which represents a policy pre-trained on D_prior and fine-tuned on the few available demonstrations in D_target.\\n2) Multi-task policy (MT) trained on D_prior $\\\\cup$ D_target, in contrast to only D_prior in our original submission.\\n\\nThe fine-tuning baseline is more competitive than standard behavior cloning but still falls short by $-6.4$% and $-25.7$% compared to STRAP on the LIBERO-10 and real-world tasks. The updated multi-task policy performs well on some tasks where we expect positive transfer, i.e., where environments and tasks in D_prior and D_target overlap, e.g., LIBERO-10 and LIBERO-90 both contain the Book-Caddy task. We\\u2019ve added the FT baseline and replaced MT trained only on D_prior with MT trained on D_prior $\\\\cup$ D_target in [Tables 1,2 & 3].\"}", "{\"summary\": \"In this paper, the authors propose a task-specific robot learning framework using pre-collected datasets. Unlike many robot learning methods that train a generalist policy model with multi-task expert data, the proposed method (STRAP) trains task-specific policies, which can yield better performance on a single task. When a few-shot target demo is given in addition to the prior dataset, STRAP filters task-relevant data from the prior data and uses it with the target demo to train the model. One of the key features of STRAP is that it retrieves data by measuring the similarity between sub-trajectories, rather than whole trajectories. Also, it utilizes subsequence dynamic time warping (S-DTW) to match between the data. 
As a result, the proposed method shows improved performance compared to the previous methods, generalist policy models, and specialist models that only use the target data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"To deal with potentially variable lengths during retrieval, STRAP uses dynamic time warping (DTW) to match the sub-sequences\", \"STRAP shows improved performance compared to the prior framework (Behavior Retrieval, Du et al., 2023), which retrieves single state-action pairs using a VAE.\"], \"weaknesses\": [\"The idea of using only data relevant to the target task, rather than learning a generalist policy through multi-task data, is interesting. However, retrieving new data from the prior dataset and training a policy each time a new scene is encountered is highly computationally costly.\", \"Comparing the entire prior data with the target data one-to-one to measure similarity is not scalable with the dataset size. Moreover, since this retrieval process requires computationally intensive neural network operations, such as DINO, it raises questions about whether this process can be performed at test time. In particular, there is no mention of how to handle an increase in offline dataset size, nor are there any discussions about limitations in this regard.\", \"There is no discussion about computational cost.\", \"STRAP uses a top-k retrieval dataset. Increasing this k could bring in more data but might reduce relevance, whereas a smaller k would provide more refined data but with a smaller amount. However, there is a lack of analysis on how changing this k value affects performance.\"], \"questions\": [\"Compared to the prior framework (Behavior Retrieval, Du et al., 2023), STRAP seems to have three main differences in its retrieval system: (a) non-parametric retrieval vs. a VAE, (b) sub-trajectory-wise retrieval vs. single state-action pairs, and (c) the use of DTW. 
Among these, what gives the most / least performance gains?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"(a) This paper proposes a novel few-shot imitation learning approach, STRAP, based on sub-trajectory retrieval. In simulated and real-world experiments, STRAP outperformed existing retrieval algorithms and multi-task learning methods, demonstrating its scalability to large offline datasets and its ability to learn robust control policies from limited real-world demonstrations.\\n\\n(b) Strengths: \\n- Novelty: Introducing sub-trajectory retrieval is a novel contribution.\", \"effectiveness\": \"STRAP demonstrates strong performance improvements over baseline methods in both simulated and real-world experiments.\", \"clarity\": \"The paper is generally well-written and includes illustrative figures that enhance understanding.\\n\\n(c) Weaknesses: \\n- Computational Cost: While the authors responded to the computational cost of retrieval (Sec A.5), it still remains a concern to some extent.\\n- It'd make the paper stronger to have a deeper investigation of generalization.\\n\\n(d) My decision is to accept the paper for its novelty and strong performance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers generally agree that STRAP is well-motivated and novel for few-shot imitation learning. They generally appreciate the clear presentation, thorough experiments, and promising results in both simulated and real-world settings.\\n\\nThe reviewers initially raised concerns regarding the strength of the multi-task policy baselines and the choice of hyperparameter K, as well as questions regarding memory and computational requirements. \\n\\nThe authors responded to the reviewers' feedback and addressed most concerns in the rebuttal. 
They added new real-world tasks to their evaluation, demonstrating STRAP's robustness and generalization capabilities; included two additional baselines, pre-training with fine-tuning (FT) and a multi-task policy trained on all available data (MT), strengthening the comparisons; addressed questions about scalability; and conducted ablations of the hyperparameter K.\\n\\nThese efforts led to improved review scores, moving the paper from borderline (leaning positive) to accept.\"}", "{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank all reviewers for their constructive feedback and for acknowledging the novelty of Dynamic Time Warping for sub-trajectory retrieval [Ndo3, JdfR, RToH], pointing out its improvements over recent methods [Ndo3, JdfR, RToH], and describing our intuition as well-motivated and compelling [Ndo3, JdfR, HWGB].\\n\\n\\nWe\\u2019ve added three new real-world tasks (\\u201cpick and place the [carrot, pepper, chili]\\u201d) to our evaluation (Kitchen) and improved the baselines (in sim and real) by adding pre-training + fine-tuning (FT) and a multi-task policy trained on all available data (MT).\\n\\nKitchen\\n| | Table | Sink | Stove |\\n|----------------|---------|--------------|--------------|\\n| BC | 12.50 | 10.00 | 14.28 |\\n| FT | 20.00 | 27.27 | 30.43 |\\n| MT | 4.34 | 31.57 | 45.00 |\\n| STRAP | **36.36** | **61.36** | **57.12** |\\n\\nTo investigate scalability to larger datasets, we construct an additional offline dataset D_prior consisting of 5000 demonstrations from the DROID dataset and 50 demonstrations collected in the same environment as D_target (Kitchen+DROID).\\n\\nKitchen+DROID\\n| | Table | Sink | Stove |\\n|----------------|---------|--------------|--------------|\\n| BC | 12.50 | 10.00 | 14.28 |\\n| FT | 28.00 | 8.69 | 22.72 |\\n| MT | 2.00 | 0.00 | 0.00 |\\n| STRAP | **56.81** | **63.04** | **45.45** |\\n\\n\\nWe address your feedback in the comment section below. 
You can find extensive ablations of hyperparameter $K$ [A.3] and a discussion on the computational complexity of STRAP [A.5] in the appendix. We\\u2019ve updated the manuscript for better readability and included additional real-world experiments and improved baselines.\"}" ] }
4UxXe3JZta
HRVMamba: High-Resolution Visual State Space Model for Dense Prediction
[ "Hao Zhang", "Yongqiang Ma", "Wenqi Shao", "Ping Luo", "Nanning Zheng", "Kaipeng Zhang" ]
Recently, State Space Models (SSMs) with efficient hardware-aware designs, i.e., Mamba, have demonstrated significant potential in computer vision tasks due to their linear computational complexity with respect to token length and their global receptive field. However, Mamba's performance on dense prediction tasks, including human pose estimation and semantic segmentation, has been constrained by three key challenges: insufficient inductive bias, long-range forgetting, and low-resolution output representation. To address these challenges, we introduce the Dynamic Visual State Space (DVSS) block, which utilizes multi-scale convolutional kernels to extract local features across different scales and enhance inductive bias, and employs deformable convolution to mitigate the long-range forgetting problem while enabling adaptive spatial aggregation based on input and task-specific information. By leveraging the multi-resolution parallel design proposed in HRNet, we introduce the High-Resolution Visual State Space Model (HRVMamba) based on the DVSS block, which preserves high-resolution representations throughout the entire process while promoting effective multi-scale feature learning. Extensive experiments highlight HRVMamba's impressive performance on dense prediction tasks, achieving competitive results against existing benchmark models without bells and whistles. We will make the source code publicly accessible.
[ "Mamba", "Dense Prediction", "Human pose estimation", "Semantic segmentation" ]
https://openreview.net/pdf?id=4UxXe3JZta
https://openreview.net/forum?id=4UxXe3JZta
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pCQD2fgIo1", "myDZwnZtlN", "cziHVK8lSG", "QVofdfbD7T", "ADDepyKedS" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730027050425, 1730710521632, 1730860344637, 1736152831114, 1730694749153 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3460/Reviewer_YPDX" ], [ "ICLR.cc/2025/Conference/Submission3460/Reviewer_YccU" ], [ "ICLR.cc/2025/Conference/Submission3460/Reviewer_Y5sm" ], [ "ICLR.cc/2025/Conference/Submission3460/Authors" ], [ "ICLR.cc/2025/Conference/Submission3460/Reviewer_M2XD" ] ], "structured_content_str": [ "{\"summary\": \"This paper summarizes the current challenges encountered when applying vision mamba for dense prediction tasks, including insufficient inductive bias, long-range forgetting, and low-resolution output representation. Subsequently, the authors propose corresponding solutions, including multi-scale convolution kernels, deformable convolution, and the HRNet framework to alleviate these issues.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper first introduces Vision Mamba to a multi-resolution branch architecture. Additionally, several techniques are introduced to alleviate the limitations of Vision Mamba, including deformable convolution and multi-kernel size convolution. These techniques provide improvements over the vanilla Vision Mamba baseline. The proposed methods are easy to follow.\", \"weaknesses\": \"1. This paper feels more like a technical report instead of an academic paper. The challenges of Vision Mamba models are obtained from previous works, and the solutions are also based on current techniques. Besides, some observations and methods are not properly cited. For example, the problem where multi-way scanning approach disrupts 2D spatial dependencies has been proposed in MambaIR[1], and the solution of Multi-scale Depthwise block has also been introduced in SMT[2]. 
Given these considerations, this paper does not introduce any new insights or techniques to the community.\n\n2. The experiments are not thorough enough. For example, for semantic segmentation, only results on Cityscapes and PASCAL Context are reported. The results on the widely used ADE20K are missing. The authors may consider reporting results with the same framework, such as UperNet, on ADE20K and comparing with publicly available results such as VMamba. Including these results and comparing against publicly available numbers would make this paper more solid.\n\n3. Some experimental data lack further explanation. In Table 7, the authors report the performance on the COCO val set. As the COCO dataset has many benchmarks, this \\\"val\\\" set is ambiguous. If it refers to pose estimation, the best result here is 74.2, which is inconsistent with the results in Table 3. Additional explanation for the setting differences is needed. Besides, some reported numbers are puzzling, such as the one in L412. The authors mentioned the poor performance of LocalVMamba on the PASCAL-Context dataset, but no further explanation for the reason is provided.\n\n4. The detailed efficiency comparisons are missing, including the inference speed and memory cost on different datasets.\n\n5. Some minor errors: In L213, Fig.3 refers to Fig.2 by mistake. Besides, the contents from L445 to L450 are not appropriate for the ablation section.\n\n6. This work does not include any discussion of limitations.\", \"reference\": \"[1]. Guo, Hang, et al. \\\"MambaIR: A Simple Baseline for Image Restoration with State-Space Model.\\\" arXiv e-prints (2024): arXiv-2402.\n\n[2]. Lin, Weifeng, et al. \\\"Scale-aware modulation meet transformer.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"questions\": \"Why does the performance of HRFormer-B on the segmentation datasets significantly lag behind the reported results? 
For example, HRFormer-B + OCR achieves a mIoU of 82.6 on Cityscapes and 58.5 on PASCAL-Context datasets, respectively. However, the performance drops to 77.3 and 42.6 in Table 5, Lines 349-350.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces HRVMamba, a high-resolution visual state space model tailored for dense prediction tasks. It builds on the Mamba framework, a hardware-efficient state space model (SSM) known for linear computational complexity. The authors highlight limitations in previous visual Mamba models\\u2014namely, insufficient inductive bias, long-range forgetting, and low-resolution outputs. To overcome these, HRVMamba incorporates the Dynamic Visual State Space (DVSS) block, combining multi-scale convolutional kernels and deformable convolution for enhanced local and long-range feature extraction. The HRVMamba model employs a multi-resolution parallel structure inspired by HRNet, preserving high-resolution representations and facilitating multi-scale feature learning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Empirical results indicate that HRVMamba outperforms contemporary CNNs, ViTs, and SSMs on benchmarks, delivering competitive performance with fewer computational resources.\", \"The figures in the paper are clean and aesthetically pleasing, which enhances readability.\"], \"weaknesses\": [\"Limited Novelty: The HRVMamba model mainly combines existing methods, including the VSS block, DWConv, DCN, and the HRNet architecture. This integration-based approach may not meet ICLR\\u2019s high standards for innovation.\", \"Concern for `Limitation 1`: While the paper addresses the lack of 2D inductive bias in previous visual Mamba models, it raises concerns about whether introducing such bias could restrict the **scaling ability** of HRVMamba. 
Vision Transformers (ViTs) have demonstrated that reduced inductive bias can facilitate better scaling, so incorporating strong inductive biases might limit HRVMamba's scalability and performance on larger-scale models.\", \"Concern for `Limitation 2`: The paper uses Deformable Convolutions (DCN) to mitigate the long-range forgetting issue observed in previous visual Mamba models. However, there is a concern about whether DCN can effectively address this problem as the sequence length scales up. The efficacy of DCN for maintaining high-level feature relationships over significantly longer sequences remains uncertain, raising questions about its robustness as a scalable solution for long-range dependencies.\", \"The paper references preprints and arXiv versions of significant works, such as Mamba (COLM), Vision Mamba (ICML), and VMamba (NeurIPS). The authors should update these citations to their final published versions to reflect the current state of the literature.\"], \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes HRVMamba, a High-Resolution Visual State Space Model designed for dense prediction tasks, such as human pose estimation and semantic segmentation. The paper addresses limitations in existing Mamba-based models, particularly Mamba\\u2019s low-resolution output and challenges in retaining long-range dependencies. To overcome these issues, the authors introduce the Dynamic Visual State Space (DVSS) block, which leverages multi-scale and deformable convolutions to improve inductive bias and mitigate long-range forgetting. 
By integrating these innovations within a high-resolution, multi-resolution parallel structure, HRVMamba achieves competitive results across dense prediction benchmarks compared to CNN, ViT, and SSM models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"HRVMamba demonstrates competitive or superior performance on COCO, Cityscapes, and PASCAL-Context benchmarks, often with fewer parameters and reduced computational load compared to similar models.\", \"weaknesses\": \"Limited Novelty: In my view, this paper incorporates techniques from CNN networks, such as DCNv4 and multi-resolution structures (from FPN and HRNet), into the Mamba block to enhance network performance. I am somewhat skeptical about whether such an innovation alone is sufficient for a publication at ICLR.\", \"questions\": \"See the weakness above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We decided to withdraw the paper and improve it.\"}", "{\"summary\": \"The paper introduces HRVMamba, a High-Resolution Visual State Space Model designed for dense prediction tasks such as human pose estimation and semantic segmentation. HRVMamba addresses the limitations of previous Mamba models by incorporating a Dynamic Visual State Space (DVSS) block, which uses multi-scale convolutional kernels to enhance inductive bias and deformable convolutions to mitigate long-range forgetting. The model is based on a multi-resolution parallel design, preserving high-resolution representations throughout the network to facilitate effective multi-scale feature learning. 
Extensive experiments demonstrate HRVMamba's competitive performance against existing CNN, ViT, and SSM benchmark models on various dense prediction tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents the Dynamic Visual State Space (DVSS) block, which combines multi-scale convolutional kernels and deformable convolutions.\n2. This paper proposes the High-Resolution Visual State Space Model (HRVMamba) based on the DVSS block and the architecture of HRNet.\n3. The proposed HRVMamba obtains improvements compared to previous approaches on several dense prediction tasks.\", \"weaknesses\": \"1. This paper's novelty is limited. It incorporates the VMamba block[1] and DCNv4 into the HRNet architecture[2], leaving it without sufficient novelty and insight. In addition, similar ideas have been explored in HRFormer[3,4]. The novelty of this paper is below the threshold for ICLR publication.\n2. The motivations for using the DCNv4 and DVSS blocks in this paper are unclear. For example, this paper lacks a specific analysis of the long-range forgetting problem of VMamba in vision and of why DCNv4 can solve such a long-range forgetting problem of Mamba. From either a theoretical or an experimental perspective, the authors need to provide concrete evidence.\n3. Comparisons with recent vision Mamba works, such as MambaVision[5], are lacking.\n4. In Fig.1, why are there neat blocks in the activation map? How can this be explained? Is it related to the scans in different directions of VMamba? Image activation is usually continuous, and I'm confused about it.\n5. How are the feature maps of different resolutions fused, downsampled, or upsampled?\n6. What is the inference latency of the proposed HRVMamba, which includes VMamba blocks, multi-scale depthwise convolution blocks, and DCNv4 blocks?\n7. In Tab.7, adding a 3x3 convolution has no effect. 
However, adding larger depth-wise convolutions, such as 5x5, 7x7, or 9x9, improves results slightly (0.3 AP), but this also introduces many additional parameters. It's unclear here whether the effect comes from extra parameters, larger convolutions, larger receptive fields, or multi-scale convolutions.\n8. The performance of baseline methods (such as HRFormer) on Cityscapes and PASCAL Context is too low and highly inconsistent with the original paper[3]: for example, HRFormer-B obtains 82.6 mIoU (Cityscapes) and 58.5 mIoU (PASCAL Context), while achieving 77.3 mIoU (Cityscapes) and 42.6 mIoU (PASCAL Context) in this paper. I think a fair comparison is very important. However, the results of the current comparison methods are obviously much lower than those of the original methods. \n\nReferences\\\n[1] Liu et al. VMamba: Visual State Space Model. NeurIPS 2024.\\\n[2] Wang et al. Deep High-Resolution Representation Learning for Visual Recognition. TPAMI 2020.\\\n[3] Yuan et al. HRFormer: High-Resolution Transformer for Dense Prediction. NeurIPS 2021.\\\n[4] Gu et al. Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation. CVPR 2022.\\\n[5] Hatamizadeh et al. MambaVision: A Hybrid Mamba-Transformer Vision Backbone. NeurIPS 2024.\", \"questions\": \"1. The proposed HRVMamba adds a multi-scale DW block, and I am curious about its performance when the FFN block is dropped.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4UXIGATUTj
Forecasting Whole-Brain Neural Activity from Volumetric Video
[ "Alexander Immer", "Jan-Matthis Lueckmann", "Alex Bo-Yuan Chen", "Peter H. Li", "Mariela D Petkova", "Nirmala A Iyer", "Aparna Dev", "Gudrun Ihrke", "Woohyun Park", "Alyson Petruncio", "Aubrey Weigel", "Wyatt Korff", "Florian Engert", "Jeff Lichtman", "Misha Ahrens", "Viren Jain", "Michal Januszewski" ]
Large-scale neuronal activity recordings with fluorescent calcium indicators are increasingly common, yielding high-resolution 2D or 3D videos. Traditional analysis pipelines reduce this data to 1D traces by segmenting regions of interest, leading to inevitable information loss. Inspired by the success of deep learning on minimally processed data in other domains, we investigate the potential of forecasting neuronal activity directly from volumetric videos. To capture long-range dependencies in high-resolution volumetric whole-brain recordings, we design a model with large receptive fields, which allow it to integrate information from distant regions within the brain. We explore effects of pre-training and perform extensive model selection, analyzing spatio-temporal trade-offs for generating accurate forecasts. Our model outperforms trace-based forecasting approaches on ZAPBench, a recently proposed benchmark on whole-brain activity prediction in zebrafish, demonstrating the advantages of preserving the spatial structure of neuronal activity.
[ "neuroscience", "forecasting", "video", "lightsheet microscopy", "zebrafish", "calcium imaging", "neuron activity" ]
Reject
https://openreview.net/pdf?id=4UXIGATUTj
https://openreview.net/forum?id=4UXIGATUTj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "udaAlJyWeZ", "omNvvOIs4G", "acaBSrb1r7", "YHszlM3EDb", "WwWTBUVTYp", "WPYE8opfCk", "WMaAy5oAFz", "W6MogSaSQU", "VtJBlOzaZV", "V1fyashiCe", "RyVNg7gmQ8", "P7gdm543o6", "GtjhM7Tpcr", "Fyb7clPqwM", "DviNxNDLoY", "BF1AuZOTf5", "88Bk2igRWV", "6jfrSMjDPY", "5FYDsobT9S" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732136282786, 1732136092319, 1732553140976, 1733175620541, 1732748541045, 1729736097813, 1734869908197, 1730423308427, 1730627547979, 1733173874906, 1732470711600, 1732623240345, 1732553205057, 1737523662526, 1732135992612, 1732136355828, 1732748507181, 1732427556327, 1732709538970 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_5NDV" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_1oJD" ], [ "ICLR.cc/2025/Conference/Submission4792/Area_Chair_F8j3" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_5NDV" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_NKSZ" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_NKSZ" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_1oJD" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ "ICLR.cc/2025/Conference/Submission4792/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4792/Reviewer_1oJD" ], [ "ICLR.cc/2025/Conference/Submission4792/Reviewer_5NDV" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the review.\\n\\n> It does not offer additional insights or novel perspectives to advance the field of artificial intelligence. Therefore, this manuscript would be more appropriately submitted to conferences or journals focused on neuroscience or medical image processing, as it does not align with the thematic scope of ICLR.\\n\\nWe respectfully disagree with this statement and refer the reviewer to the ICLR 2025 call for papers that lists applications to neuroscience as one of the relevant topics. Accordingly, we selected \\u201cApplications to neuroscience & cognitive science\\u201d as the primary area for this submission, as can be seen on top. We further emphasize that our work does indeed advance the field of AI by extending existing methods to a new, complex, and data-rich application domain, as well as by technical advances needed to scale them to volumetric videos. \\n\\n> The authors fail to elucidate the significance of this study anywhere within the main text, which raises questions about the necessity of forecasting whole-brain neuronal activity. The authors are encouraged to provide additional context in the abstract or introduction section.\\n\\nThank you for this suggestion. We extended the introduction section of the paper to clarify the relevance of brain activity forecasting.\\n\\n> The paper presents a limited comparison with only one baseline, namely the \\\"trace-based model,\\\" as shown in Figure 6. It raises the question whether ZAPBench, as a benchmark, evaluated only this single model type. The authors are encouraged to include additional baselines for comparison to substantiate the superiority of the proposed method.\\n\\nWe compare to the best trace-based model on ZAPBench (c.f. 
line 383 in the revised manuscript), selected from a set of four state-of-the-art time series models. Also note that we are the first to propose a video-based model for such data and therefore unfortunately no existing video baselines to compare against exist. It is highly non-trivial to scale existing video-based models to the volumetric case while maintaining reasonable computational performance.\\n\\n> On line 308, the authors refer to a \\\"segmentation mask,\\\" yet on line 68, they claim that their method can directly process 4D data. The authors are requested to clarify the apparent contradiction in their narrative.\\n\\nOur method performs video-to-video forecasting of neural recordings. However, to make a fair comparison and quantify the benefit of our method, we compared it to trace-based models. We do this by applying the segmentation mask to the voxel-level video forecast (i.e., model output).\\n\\n\\n> In the fourth contribution (line 108), the authors state that their proposed method is the \\\"only approach that consistently benefits from multivariate information.\\\" However, I did not encounter any experimental justification related to multivariate information in the main text. If such experiments were conducted, please direct me to the relevant sections within the paper.\\n\\nThe best time series-based method on ZAPBench is a univariate model, which we also compare to. The multivariate time-series methods did not show a performance improvement. However, the video-based model performs best and relies on multivariate information (the model processes information from multiple cells simultaneously). Sec. 3.2 and Figure 5 in the paper clearly show and quantify the benefit of increasing the receptive field of the UNet, which corresponds to utilizing more multivariate information. Furthermore, based on a suggestion by reviewer 5NDV, we now also include an ablation evaluating the importance of the information contained outside of the cell masks. 
This additional experiment reveals that unsegmented voxels do not have a significant impact on the video model's performance.\\n\\n> The description of temporal dimension processing on line 200 is unclear. I would like to confirm whether the authors' approach involves merging the temporal dimension with the batch dimension, such as transforming the data shape as follows: (batch, 2048, 1152, 72, T) --> (batch*T, 2048, 1152, 72). If not, please provide clarification on their methodology.\\n\\nWe do not merge the batch and temporal dimension but, as pointed out in lines 215 and following, use the temporal dimension as channels/features. Since we have a grayscale image, we would normally have a single channel at the input level. Here, we instead use the frames as input channels, which helps significantly with model scalability.\\n\\nWe respectfully ask the reviewer to reconsider their evaluation in light of these clarifications.\"}", "{\"comment\": \"Thank you for the review and for acknowledging the novelty of directly modeling volumetric video for forecasting brain activity and the extensive experimental evidence we presented.\\n\\n> \\u201cit is unclear whether or not this is due to the additional information that exists in the raw data and that the video-based model is indeed taking advantage of such information.\\u201d... \\u201cI suggest applying the segmentation masks to the video, i.e. set regions outside of the identified cells as background, and train the video-based model on the masked videos.\\u201d\\n\\nThank you for this suggestion. We have now performed this experiment and added the results in the updated revision of the paper, finding that: \\u201cThe grand average test MAE for that model (0.0266\\u00b10.0046) was not significantly different from that of the video model processing the complete volume (0.0267\\u00b10.0042). 
This indicates that the unsegmented voxels are unlikely to contain information that could improve forecasts and that any gains relative to the trace-based models can be attributed to the utilization of the spatial distribution of the underlying calcium signals within and across the segmented cells.\\u201d\\n\\n\\n> \\u201cMAE might not be the most intuitive metric for getting a sense of performance\\u201d \\u2026 \\u201cI suggest the authors include metrics that are commonly used in neural response prediction\\u201d.\\n\\nThank you for the suggested metrics. Unfortunately, we do not have sufficient repetitions within the trials to reliably estimate the variances required by these metrics. Nonetheless, we updated the paper to refer to the mentioned prior work as potential future evaluation metrics for cases where more trials are available. \\n\\nWhile an improved MAE on the test set clearly indicates a better generalizing model, we agree that the absolute numbers are hard to interpret. To provide an intuition of what an MAE difference of e.g. 0.005 as reported in our results can look like, we included a new supplementary figure (Figure 10 in the revised manuscript).\\n\\n\\n> \\u201cCan the authors share the time it took to train the final (best) video-based and trace-based models?\\u201d\\n\\nWe have added this information to Appendix A.4. We acknowledge the significantly increased computational cost of this approach, but also note that the reduced extent to which the raw data needs to be preprocessed when forecasting directly in the video domain.\\n\\n> Minor questions\\n\\n- Minor Q1: The frame rate of the video is roughly 1 Hz.\\n- Minor Q2: We use $C=4$ and $C=256$ as two extreme cases to assess the relevance of temporal context and quantify it in Figure 5. Indeed, it shows the difficulty of predicting $H=32$ frames from only 4 conditioning frames. 
However, generative natural video models can generate many more frames starting from only a few and we therefore argue that it does make sense to at least assess performance in this regime.\n- Minor Q3: We indeed optimize the trace-based MAE for the video models so that a fair comparison with the trace-based models can be made. If we optimize the voxel-wise MAE, the models perform relatively worse when evaluated with trace-MAE, as it corresponds to a different weighting of neurons by their size.\n- Minor Q4: The hyperparameters (learning rate, optimizer, weight decay, dropout) were optimized on the validation set using small grids of 3-4 values each, starting from commonly used values.\"}", "{\"comment\": \"Thank you for your answer. We address your remaining questions below.\n\n> I tentatively agree with the authors' perspective about the thematic scope of ICLR. However, I request that the authors provide references to previously published works on similar topics in ICLR as supporting evidence.\n\nWe respectfully suggest that the scope of the conference as outlined in https://iclr.cc/Conferences/2025/CallForPapers is not a matter of _our perspective_ here.\nPlease note that applications to neuroscience have been within ICLR\u2019s thematic scope since its inception, see e.g., the Call for Papers of ICLR 2013: https://iclr.cc/archive/2013/call-for-papers.htm\n\nAnalysis of brain activity is clearly within the realm of neuroscience, and there are decades of prior work based on various modalities (ephys, fMRI, calcium recordings, etc). Extending this line of inquiry, *high-resolution whole-brain* activity analysis is a novel topic enabled by recent experimental advances, and unfortunately but correspondingly, related prior work is sparse. For a sample ICLR paper covering topics similar to ours, we refer the reviewer to https://openreview.net/forum?id=CJzi3dRlJE-, which provides a model of brain activity in a much simpler model organism (_C. 
elegans_).\\n\\n> The motivation for this study is unclear. Although the authors discussed the significance of this study in the introduction, they did not address the necessity of forecasting whole-brain neuronal activity. My primary concern is understanding the role of forecasting whole-brain neuronal activity in downstream tasks within neuroscience. If the authors' work is merely predicting subsequent frames of brain imaging data based on earlier frames, its contribution appears limited, as numerous similar studies already exist, such as video frame prediction in computer vision and time-series forecasting in machine learning.\\n\\nThere is indeed extensive prior work in natural video prediction and time-series forecasting in various contexts. However, we argue that the nature of the application domain and the input data matter a lot, present different problems, and demand specialized solutions. In the video domain, our data is higher dimensional (3d+t for brain calcium movies vs 2d+t for natural videos) and generated by a small but complex system (larval zebrafish brain). Natural video models need to deal with lighting, reflections, perspective, occlusions, and movement. This is completely different in our case, where none of these are relevant and we are instead concerned with the spatiotemporal dependencies between the units of a complex network (neurons). In the paper we show that using the video representation leads to more accurate models than using time series.\\n\\nWe view our work as fundamental research in advancing our ability to model and understand a complex dynamical system that is the brain. This can be seen as related to the area of brain simulation (see e.g.: [doi: 10.1016/j.cell.2015.09.029](https://pubmed.ncbi.nlm.nih.gov/26451489/)), which focuses on building detailed models of specific brain regions or circuits, incorporating knowledge of neuronal connectivity and biophysical properties. 
Our approach is complementary: it is more data-driven, able to model details which are not mechanistically understood, and rigorous in evaluation, in that model predictions are directly compared to recordings of a real brain.\\n\\nWhile at this stage of the research downstream applications are not a direct motivation, we sketch out here some potential directions for which our present work could be a foundation:\\n- **Experimental design optimization**: Real recording sessions are in practice limited to about 2h, so the ability to predict responses to stimuli and perturbations in silico could be used to optimize what gets tested in the real world.\\n- **Brain-computer interfaces (BCIs)**: Being able to precisely forecast neural activity could make it possible to build more efficient BCIs (lower error rates, lower latency).\\n- **Anomaly detection**: Identifying deviations from typical patterns of activity could serve as biomarkers for neurological conditions and, in the future, for prediction of individual responses to therapy.\\n\\n> Although the authors have added baselines, they remain insufficient. Furthermore, the study lacks a broader range of evaluation metrics; I note that only MAE is used throughout the manuscript. I would prefer to see evidence that the proposed large-scale training approach can improve the performance of certain meaningful downstream tasks.\\n\\nPlease see our response above regarding downstream tasks. Regarding metrics, we follow the setup of ZAPBench itself, which defined the dataset and the metrics that should be used to evaluate forecasting models built upon it.\"}", "{\"comment\": \"I thank the authors for the additional experiment on original segmented cells vs shuffled cells, I believe this is a nice test to isolate the performance gain due to the spatial organization in the volumetric video data. 
Also thank you for adding correlation as a metric.\\n\\nI have updated my score accordingly.\"}", "{\"comment\": \"Thank you for raising the presentation score after our updates to the paper. We just wanted to note that as a result of discussion with reviewer 5NDV we have now extended the metrics to cover correlation scores, which paint a broadly similar picture to that presented by MAE. Measured this way, the video approach outperforms trace-based methods even in the long-context regime ($C=256$). We believe that additional non-video baselines are out of scope of this work and already sufficiently covered by ZAPBench itself.\"}", "{\"summary\": \"This manuscript proposes the utilization of deep learning techniques for the prediction of neuronal activity recordings with fluorescent calcium, asserting superior performance over previous baselines. A series of ablation studies have revealed practical insights into model pre-training and hyperparameter tuning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.This paper employs a deep learning approach based on U-Net to directly process 4D neural activity recordings, circumventing complex preprocessing methods that may introduce performance degradation.\\n\\n2.A series of ablation studies have revealed practical insights into model pre-training and hyperparameter tuning.\\n\\nAfter reading the authors' rebuttal and the comments from Reviewer 5NDV, I agree that neural response prediction/forecasting directly from volumetric video is novel. This approach allows for minimal preprocessing of the data, which can be advantageous for deep learning-based methods.\", \"weaknesses\": \"1.The authors fail to elucidate the significance of this study anywhere within the main text, which raises questions about the necessity of forecasting whole-brain neuronal activity. 
The authors are encouraged to provide additional context in the abstract or introduction section.\\n\\n2.This paper primarily utilizes the initial frames of neuronal activity recordings with fluorescent calcium indicators to predict subsequent frames, representing an application of U-Net in a specific domain. It does not offer additional insights or novel perspectives to advance the field of artificial intelligence. Therefore, this manuscript would be more appropriately submitted to conferences or journals focused on neuroscience or medical image processing, as it does not align with the thematic scope of ICLR.\\n\\n3.The paper presents a limited comparison with only one baseline, namely the \\\"trace-based model,\\\" as shown in Figure 6. It raises the question whether ZAPBench, as a benchmark, evaluated only this single model type. The authors are encouraged to include additional baselines for comparison to substantiate the superiority of the proposed method.\", \"questions\": \"1.On line 308, the authors refer to a \\\"segmentation mask,\\\" yet on line 68, they claim that their method can directly process 4D data. The authors are requested to clarify the apparent contradiction in their narrative.\\n\\n2.In the fourth contribution (line 108), the authors state that their proposed method is the \\\"only approach that consistently benefits from multivariate information.\\\" However, I did not encounter any experimental justification related to multivariate information in the main text. If such experiments were conducted, please direct me to the relevant sections within the paper.\\n\\n\\n3.The description of temporal dimension processing on line 200 is unclear. I would like to confirm whether the authors' approach involves merging the temporal dimension with the batch dimension, such as transforming the data shape as follows: (batch, 2048, 1152, 72, T) --> (batch*T, 2048, 1152, 72). 
If not, please provide clarification on their methodology.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a UNet-based approach for forecasting whole-brain neuronal activity using volumetric video data, achieving strong results on the ZAPBench benchmark. While the focus on leveraging raw volumetric video data is promising, the paper would benefit from engaging more with the broader field of 4D spatio-temporal modeling, as these approaches are commonly used in areas like fMRI analysis and climate forecasting. The authors do state in the rebuttal that there exist no video-based models for this dataset. However, since this work is strongly motivating its methodological innovation, comparison against an array of 4D spatio-temporal models needs to be compared for this new dataset. Even if the UNet is proposed as an option in this paper, this is mentioned as one of the baselines in the anonymous reference which introduces the ZAPBench benchmark, further questioning the technical contribution of this work. Therefore, although the application brings important contributions, the methodological contributions, the thrust of this work, need to be improved.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers requested additional baselines, and the authors responded by incorporating baselines from the anonymous reference (Anonymous, 2024), which is the ZAPBench benchmark paper. While the reviewers appeared satisfied with this response, it is notable that these baselines do not include comparisons with volumetric video models, a critical omission given the paper\\u2019s focus on volumetric forecasting. 
Additionally, the paper overly relies on the benchmark results of Anonymous, 2024, with UNet cited as a volumetric video model baseline but with additional details deferred to the referenced work, raising concerns about the independence of this submission. Although the reviewers raised their scores, they may not have fully recognized these issues. Given the standard nature of the technical contributions, the model-specific analyses, and the close dependence on the anonymous submission, I have decided to recommend rejection.\"}", "{\"summary\": \"This paper proposes a new approach to neuronal response modeling by predicting/forecasting the volumetric video instead of the per-neuron calcium trace (dF/F) or spike train, which is the norm in neural response prediction. This approach allows the model to take advantage of the inter-cell activity and spatial organization of the population that is typically discarded when deconvolving the volumetric video to individual response traces. The authors evaluated a range of video and trace-based models on ZAPBench and showed that the video-based model outperforms trace-based models in short temporal context length conditions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"To my knowledge, neural response prediction/forecasting directly from volumetric video is novel. This allows minimal preprocessing of the data which can be beneficial to deep learning-based methods.\", \"A wide range of training and evaluation conditions are compared, including trade-offs of spatial and temporal resolution, pre-training vs direct training, and training set size and combinations. 
These empirical results can guide future work in modeling neural responses.\"], \"weaknesses\": [\"Please find my suggestions for the following points in the Question section.\", \"A key motivation of this work, as stated by the authors in Section 2, is that the typical deconvolution step to convert volumetric video to dF/F response traces can lead to loss of information, such as cell spatial organization, inter-cell activities, etc. However, while the video-based model appears to outperform the trace-based model in short temporal context-length conditions (though similar performance in longer context length), it is unclear whether or not this is due to the additional information that exists in the raw data and that the video-based model is indeed taking advantage of such information.\", \"MAE might not be the most intuitive metric for getting a sense of how the (video and traced-based) models are performing. For instance, I am not sure if an MAE value of 0.02 is good or bad, or how big of a difference is an MAE of 0.02 to 0.04? In particular, I believe ZAPBench is a new dataset and we don\\u2019t have any other models to compare against these MAE values, other than the single trace-based model provided.\", \"Unclear trade-off in computation cost between video-based and trace-based models.\"], \"questions\": [\"Major\", \"It would be nice to test whether or not the trace-base model performed (relatively) poorly due to the lack of inter-cell activities or imperfect masking as suggested in Section 2 and Figure 2. I suggest applying the segmentation masks to the video, i.e. set regions outside of the identified cells as background, and train the video-based model on the masked videos. 
This should give us a sense of the influence of inter-cell activities and imperfect masking, and isolate the influence of spatial organization of the cells.\", \"I suggest the authors include metrics that are commonly used in neural response prediction so that readers can have a sense of how well these models are performing, such as normalized correlation ($CC_\\\\text{norm}$) [1], or fraction of explainable variance explained (FEV) [2], basically metrics that takes trial-to-trial variability into account.\", \"Can the authors comment on the computational cost of the models? The authors stated in the hyperparameter search section and appendix A.3 that 16 A100 40GB GPUs are used to train the video-based model, and ~5k GPU hours (so ~300 hours in wall-time) was used in the loss ablation experiment in Figure 4, which is a considerable amount of computational time and cost. Can the authors share the time it took to train the final (best) video-based and trace-based models? I believe the authors should discuss the trade-off between the two approaches in computation cost if they are indeed substantially different. To clarify, I think it is fine for the method to be more computationally expensive than other methods, but it is important to point it out.\", \"Minor\", \"What is the frame rate of the video?\", \"Why and how are the two temporal context lengths (4 and 256) selected? Does it make sense to predict the future 32 frames from only 4 frames?\", \"In the hyperparameters section and Figure 1, it is stated that the models optimize the trace-based MAE. Does this include the video-based model? Since the video-based model inputs and outputs a video, does it make a difference to optimize the recorded and predicted video MAE?\", \"How are the hyperparameters selected? Hand-picked or via some form of hyperparameter search (random search, bayesian search, etc.)\", \"[1] Schoppe, Oliver, et al. 
\\\"Measuring the performance of neural models.\\\" Frontiers in computational neuroscience 10 (2016): 10.\", \"[2] Cadena, Santiago A., et al. \\\"Deep convolutional models improve predictions of macaque V1 responses to natural images.\\\" PLoS computational biology 15.4 (2019): e1006897.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel method for neural activity forecasting of zebrafish that works on the raw volumetric video instead of using standard preprocessing that reduces the original 4D space to 1D (trace matrix) and disregards spatial relationships between neurons. To do this, a u-net architecture is employed, taking advantage of a large scale neural dataset and performing extensive ablations for model selection. Multiple measures were taken to enable scaling the architecture for this computationally expensive problem, such as using the temporal context dimension as input channels, lead-time conditioning, and distributed training. The ablation results show that (1) pretraining on other specimens does not help, (2) there is a trade-off between spatial and temporal context, and (3) that downsampling input resolution up to 4x is beneficial to performance. 
Compared to the best trace-matrix models, the proposed multivariate model achieves 10% reduction in MAE for the short context forecasting setting in both the test and the holdout sets, while it is comparable to the trace-matrix models for long-context forecasting in the test set and 12% better in the holdout set.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is original in the sense of creative combinations of existing ideas (the components of the model that enable scaling), application to a new domain (forecasting on minimally preprocessed / raw data from weather to neural data), as well as removing limitations from previous works (removing dependency on segmentation mask accuracy and avoiding loss of information caused by conversion to trace-matrix). Writing style is very high quality, and all the methods and results get across in a relatively clear manner to the reader. To the specific line of research (forecasting neural data), the method proposed seems to be a significant step forward in the field\\u2019s development, moving from hand-crafted to learnable features.\", \"weaknesses\": [\"There are some presentation issues that impact the understanding of a reader that is not very familiar to forecasting neural data.\", \"These are mainly in the abstract and introduction, while once entering section 2 all misunderstandings of this reader were resolved. Nevertheless it would be beneficial for the paper to have them fixed. (1) Until section 2 it was not very clear that the goal is to predict future steps from previous steps, and not one neural modality (e.g. electrical signal) from another (e.g. blood oxygen). (2) It was not clear that the neuron segmentation mask is applied in both the trace models and the proposed model, but at different points in the pipeline, i.e. 
in the latter the forecasting itself is done on the volumetric video and afterwards the mask is applied before computing the error - without knowing this it is not clear how the two methods can be compared fairly. It would also help if in Figure 1 the same notation was used between orange (trace) and blue (proposed) in the segmentation mask block, i.e. instead of \\u201cExtract Neurons\\u201d and \\u201cMask Neurons\\u201d say \\u201cApply segmentation mask\\u201d in both cases. (3) In the abstract the phrase \\u201cwe design a model to handle the high resolution and large receptive fields\\u2026\\u201d is structured in a confusing way where the reader does not understand if the large receptive fields are an aspect of the model or the recordings. (4) Minor - the footnote on page 2 is not so much footnote information, but rather more suitable for the main text.\", \"Additionally, Figure 2 should have a more informative caption that explains better what is shown, e.g. it is not clear what the colored blobs are, segmentation masks?\", \"Is H=32 the only setting that is tested and why? Not sufficiently described in the paper.\", \"In the conclusion, calling the findings counterintuitive seems excessive; there is no reason to assume that high input resolution, pretraining, or increased model capacity works well for all domains and applications. Results sufficiently showcase enough reasons why these sometimes useful settings might lead here to overfitting, distribution shifts, etc.\"], \"questions\": \"Could \\\"scaling the field of view of the network so the size remains constant while increasing to full resolution\\\" make the full-resolution model handicapped in terms of field of view? 
There is a chance this is already answered in the paper but the reader missed it.\\nOther than that, fixing the presentation issues in the previous section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"One additional comment from our side. After the discussion above, we decided to perform additional checks to narrow down the sources of improved performance. We rendered two \\\"synthetic calcium movies\\\": one (\\\"rendered traces\\\") with the voxels of the segmented cells set to the corresponding trace value (uniformly throughout each cell), and one (\\\"shuffled traces\\\") with the traces randomly reassigned to different cells. Training video models on this data, we observed that the model using \\\"rendered traces\\\" performs equivalently to the ones using the full df/f volume and the segment-masked df/f volumes. The \\\"shuffled traces\\\" variant however showed statistically worse test-MAE.\\n\\nThese additional experiments strengthen our original conclusion that the distribution of fluorescent signal within individual cells and throughout unsegmented voxels has negligible impact on forecasting accuracy, and that the additional accuracy of the video model stems for the utilization of multivariate, cross-cell information -- precisely the type of information that trace-based models in ZAPBench have difficulty using.\\n\\nWe will add a discussion of these experiments to the camera-ready version of the paper should it be accepted.\"}", "{\"comment\": \"Thank you for making the requested changes, and clarifying those points.\"}", "{\"title\": \"Official Comment by Reviewer 1oJD\", \"comment\": \"Thank you for your response. Regarding the concern about whether this paper exceeds the thematic scope of ICLR, the authors have provided relevant references as evidence, which have addressed the issue satisfactorily. 
Additionally, I appreciate the authors' revisions to the abstract and introduction sections. As a result, I will increase the Presentation score to 2.\\n\\nHowever, in terms of experiments, the selected baselines and evaluation metrics remain insufficiently robust. Furthermore, as shown in Figure 6, under the conditions of Test C = 256 and Holdout C = 256, the proposed method shows almost no improvement over the baseline (Trace). Under the condition of Holdout C = 4, the performance of the proposed method is notably weaker than the baseline (Trace) when the number of steps exceeds 5. Therefore, I do not believe this paper meets the acceptance standard, and I will maintain my current score.\"}", "{\"title\": \"(continuation of response above)\", \"comment\": \"> I kindly request that the authors highlight the modified content in the manuscript during the rebuttal phase. Reviewing the changes by comparing the old and new versions line by line is very time-consuming.\\n\\nPlease note that OpenReview already provides a PDF diff tool to highlight changes between revisions (see https://draftable.com/compare/HtOiPciNxaoO for the current paper). We also summarized all changes made to the manuscript in a top-level comment (https://openreview.net/forum?id=4UXIGATUTj&noteId=BF1AuZOTf5)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"Thank you for the positive review acknowledging the quality of the work and the significance of advancing neuronal forecasting. In the following, we address your concerns and questions:\", \"We edited the abstract and introduction to improve the clarity issues you pointed out regarding the task, segmentation mask, and receptive fields.\", \"We extended the caption of Figure 2 as suggested. Indeed, the colored blobs denote the segmentation masks of individual cells.\", \"In the paper we follow the benchmark setup of ZAPBench, which poses the task of predicting $H=32$ future timesteps. 
Longer time horizons currently seem to be less interesting as performance of all existing methods significantly deteriorates to the level of naive baselines beyond 32 timesteps. We note that shorter time horizons are automatically part of the current evaluation.\", \"We completely agree with your assessment that these techniques should not be expected to work (out of the box) in all domains. This is what we wanted to express by calling them \\\"counterintuitive\\\" -- to call the attention of the reader to the fact that different strategies might be required in the video forecasting domain. We have now softened the statement and just say \\\"contrary to our expectations\\\" instead.\", \"> Could \\\"scaling the field of view of the network so the size remains constant while increasing to full resolution\\\" (line 349) make the full-resolution model handicapped in terms of field of view?\", \"The higher resolution models use one and two more downsampling blocks for the 2x and full resolution models, respectively. Therefore, we make sure they are not handicapped in terms of their capacity and field of view. 
Details can be found in Appendix 1.3 and specifically from line 644 starting with \\u201cTo ensure equitable comparison, the architectures of these models were kept broadly consistent, with necessary adjustments to accommodate the differing input resolution\\u201d.\"]}", "{\"comment\": \"We thank all reviewers for their feedback, and appreciate the positive comments related to the novelty of our approach of using deep learning techniques to directly process 4d neural activity recordings, the significance of this work for the field of neuroscience, and the extensive evaluations and ablations.\\nWe replied to each reviewer individually to address their suggestions.\\n\\nLimited baselines were listed as a concern by multiple reviewers, so here we additionally highlight that ZAPBench itself reports the results of multiple timeseries-based forecasting approaches, and we compare to the *best* performing methods for the $C=4$ and $C=256$ conditions (TSMixer, and MLP, respectively) for clarity. For completeness, we added Figure 9 to the appendix, which contains the other baselines.\", \"the_revised_paper_contains_the_following_changes\": [\"Revised the abstract and introduction to improve clarity regarding the task, segmentation mask, receptive fields, and relevance of neural activity forecasting.\", \"Updated labels in figure 1.\", \"Updated caption of figure 2.\", \"Updated figures 6, 7, and 8 to reflect revised hold-out condition results on ZAPBench for trace-based models, along with corresponding text changes.\", \"Updated discussion of metrics with additional references.\", \"In the conclusion section, rephrased 'counterintuitive' to 'contrary to our expectations'.\", \"Added masking experiment in Appendix A.2.\", \"Added supplementary figure (Fig. 9) to Appendix A.3 showing performance relative to four trace-based approaches included in ZAPBench.\", \"Added supplementary figure (Fig. 
10) to Appendix A.3 illustrating MAE differences.\", \"Added more detailed compute stats to Appendix A.4.\"]}", "{\"comment\": \"Thank you for the additional comments!\\n> I thank the authors for performing this additional experiment. The model trained on segmented video should have some of the constraints as the trace-based model does (i.e. imperfect segmentation, elimination of inter-cell activities, etc), though it still achieved similar performance as the model trained on raw volumetric video. Does this result contradict some of the motivations stated in the paper: \\\"While this is a natural choice, it loses information related to cell size, position and spatial distribution of intensities within it, and completely discards voxels that are not part of any segmentation mask or incorrectly segmented. Figure 2 depicts these potential issues.\\\"?\\n\\nIndeed, we believe that this result narrows down the range of possible sources of the empirically improved performance and clarifies which hypotheses we posed as the motivation for this line of work in section 2 are correct. Specifically, it suggests that segmentation quality is not a significant limitation, and that the gains can be attributed to the better utilization of the spatial distribution of the observed fluorescence signal. This interpretation is further supported by the spatial context results in Fig. 5 and the input resolution results in Table 2, suggesting that the correlations between cells, but not the intra-cellular distribution of fluorescence, drive the increased accuracy. We included these comments in section 4.2 of the paper.\\n\\n> If there aren't enough repetitions in each trial to use metrics that account for trial-to-trial variability, can the authors at least provide the single-trial correlation between recorded and predicted responses (average over neurons)? 
Again, it is hard to interpret MAE values, especially without knowing the range of the calcium response.\\n\\nThank you for this suggestion. We have added correlation scores between predicted and recorded responses to the paper (Figure 13 and Table 3 in appendix A.4), in addition to mentioning the range of the underlying df/f calcium signals (-0.25, 1.5) in section 2. We find that the proposed video-based model achieves a stronger correlation with the recorded responses, 40% higher in the short context regime ($C=4$) and 14% higher in the long context regime ($C=256$). Furthermore, we observe that for $C=4$, video models show positive correlation throughout the complete prediction window $H=32$, unlike trace-based models which become uncorrelated after 20 time steps.\\n\\n> Thank you for providing the computational cost information. Given the vast difference in computation cost between the trace-based and video-based models (2 hours on a single A100 GPU vs 36 hours on 16 A100 GPUs), I encourage the authors to include this limitation in the main text, perhaps in the discussion/conclusion section.
Together with the network FOV and input resolution results already reported in the paper, we believe these data points to be strongly indicative of the video model better utilizing the multivariate nature of the input signals.\\n\\n> MAE can be a good optimization objective, but it is unintuitive as a metric for comparing models. I suggest adding correlation-based metrics (single trial correlation, average correlation, FEV, etc.), which are commonly used in neural response prediction [1, 2].\\n\\nThis has now been addressed. We hope the reviewer and other readers will find that these additional metrics provide more context for the interpretation of our results.\"}", "{\"title\": \"Official Comment by Reviewer 1oJD\", \"comment\": \"Thank you for your response . While some of my concerns have been addressed, I still have a few remaining questions.\\n\\n1. I tentatively agree with the authors' perspective about the thematic scope of ICLR. However, I request that the authors provide references to previously published works on similar topics in ICLR as supporting evidence.\\n\\n2. The motivation for this study is unclear. Although the authors discussed the significance of this study in the introduction, they did not address the necessity of forecasting whole-brain neuronal activity. **My primary concern is understanding the role of forecasting whole-brain neuronal activity in downstream tasks within neuroscience.** If the authors' work is merely predicting subsequent frames of brain imaging data based on earlier frames, its contribution appears limited, as numerous similar studies already exist, such as video frame prediction in computer vision and time-series forecasting in machine learning.\\n\\n3. Although the authors have added baselines, they remain insufficient. Furthermore, the study lacks a broader range of evaluation metrics; I note that only MAE is used throughout the manuscript. 
I would prefer to see evidence that the proposed large-scale training approach can improve the performance of certain meaningful downstream tasks.\\n\\n4. I kindly request that the authors highlight the modified content in the manuscript during the rebuttal phase. Reviewing the changes by comparing the old and new versions line by line is very time-consuming.\"}", "{\"comment\": \"I thank the authors for their detailed response, please find my follow-up questions and comments below.\\n\\n> Thank you for this suggestion. We have now performed this experiment and added the results in the updated revision of the paper, finding that: \\u201cThe grand average test MAE for that model (0.0266\\u00b10.0046) was not significantly different from that of the video model processing the complete volume (0.0267\\u00b10.0042). This indicates that the unsegmented voxels are unlikely to contain information that could improve forecasts and that any gains relative to the trace-based models can be attributed to the utilization of the spatial distribution of the underlying calcium signals within and across the segmented cells.\\u201d\\n\\nI thank the authors for performing this additional experiment. The model trained on segmented video should have some of the constraints as the trace-based model does (i.e. imperfect segmentation, elimination of inter-cell activities, etc), though it still achieved similar performance as the model trained on raw volumetric video. Does this result contradict some of the motivations stated in the paper: \\\"While this is a natural choice, it loses information related to cell size, position and spatial distribution of intensities within it, and completely discards voxels that are not part of any segmentation mask or incorrectly segmented. Figure 2 depicts these potential issues.\\\"? \\n\\n> Thank you for the suggested metrics. 
Unfortunately, we do not have sufficient repetitions within the trials to reliably estimate the variances required by these metrics. Nonetheless, we updated the paper to refer to the mentioned prior work as potential future evaluation metrics for cases where more trials are available. While an improved MAE on the test set clearly indicates a better generalizing model, we agree that the absolute numbers are hard to interpret. To provide an intuition of what an MAE difference of e.g. 0.005 as reported in our results can look like, we included a new supplementary figure (Figure 10 in the revised manuscript).\\n\\nIf there aren't enough repetitions in each trial to use metrics that account for trial-to-trial variability, can the authors at least provide the single-trial correlation between recorded and predicted responses (average over neurons)? Again, it is hard to interpret MAE values, especially without knowing the range of the calcium response. \\n\\n> We have added this information to Appendix A.4. We acknowledge the significantly increased computational cost of this approach, but also note that the reduced extent to which the raw data needs to be preprocessed when forecasting directly in the video domain.\\n\\nThank you for providing the computational cost information. Given the vast difference in computation cost between the trace-based and video-based models (2 hours on a single A100 GPU vs 36 hours on 16 A100 GPUs), I encourage the authors to include this limitation in the main text, perhaps in the discussion/conclusion section.\\n\\nAll in all, my main concerns are:\\n1. It remains unclear why training the model on raw volumetric video is more advantageous than the existing approach of segmented calcium traces. 
The additional analysis provided by the authors shows that the models trained on raw and segmented volumetric video achieved similar performance, so is the performance gain solely due to the additional information on the spatial organization of the cells? If so, the experiments conducted in this paper haven't demonstrated that.\\n2. MAE can be a good optimization objective, but it is unintuitive as a metric for comparing models. I suggest adding correlation-based metrics (single trial correlation, average correlation, FEV, etc.), which are commonly used in neural response prediction [1, 2].\\n\\nFor these reasons, I maintain my original score.\\n\\n[1] Schoppe, Oliver, et al. \\\"Measuring the performance of neural models.\\\" Frontiers in computational neuroscience 10 (2016): 10.\\n\\n[2] Turishcheva, Polina, et al. \\\"The dynamic sensorium competition for predicting large-scale mouse visual cortex activity from videos.\\\" ArXiv (2023).\"}" ] }
4T33izzFpK
metabench - A Sparse Benchmark of Reasoning and Knowledge in Large Language Models
[ "Alex Kipnis", "Konstantinos Voudouris", "Luca M. Schulze Buschoff", "Eric Schulz" ]
Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the Open LLM Leaderboard aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from n > 5000 LLMs to identify the most informative items of six benchmarks, ARC, GSM8K, HellaSwag, MMLU, TruthfulQA and WinoGrande (with d = 28,632 items in total). From them we distill a sparse benchmark, metabench, that has less than 3% of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original individual benchmark score with, on average, 1.24% root mean square error (RMSE), (2) reconstruct the original total score with 0.58% RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is r = 0.94.
[ "llm", "benchmarking", "item response theory", "factor analysis", "information" ]
Accept (Poster)
https://openreview.net/pdf?id=4T33izzFpK
https://openreview.net/forum?id=4T33izzFpK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tvOUxFk47w", "rKAbDmKCzR", "ofgTEwn8MA", "kAXlmZ8ghW", "egDyYYdsJp", "bLXEKvZpp0", "aKIgwNRrnG", "Yo2pgKVGeY", "X7By4h0TB4", "VPN9nuLRUm", "SWEVjx6c4b", "SRB8RjL4YF", "NeubKN0sQ6", "HgqP2C6U19", "H4ZzwEqdo8", "5WNUzhlCgv", "4hxVOBprsA", "2afry61nrr" ], "note_type": [ "meta_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734400135302, 1737523808587, 1730775321567, 1732629074613, 1732221137784, 1733179398419, 1732220896348, 1732220173526, 1732220647939, 1730713076142, 1732221294530, 1732952232629, 1732221105928, 1730686571419, 1730723936437, 1732221653336, 1732579842871, 1732220704840 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6994/Area_Chair_6rHi" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_rtaB" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_iUxe" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_cph2" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_cph2" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_Fxth" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_iUxe" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ], [ "ICLR.cc/2025/Conference/Submission6994/Reviewer_rtaB" ], [ "ICLR.cc/2025/Conference/Submission6994/Authors" ] ], "structured_content_str": [ 
"{\"metareview\": \"The authors were able to address all issues raised by the reviewers. All reviewers except one were positive about the work. That reviewer was not able to respond, but the authors addressed the issues raised by them by adding more experiments to evaluate the potential for data contamination, clarify the applicability of assumptions, and discuss alternative data selection strategies.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal addressed reviewer concerns. The authors re-ran the pipeline with additional random seeds, and implemented alternative item selection strategies. They addressed memorization concerns by testing their benchmark with disjoint sets, permuted answer labels, and fine-tuned models, showing that memorization risks can be mitigated with multiple benchmark versions. The authors also validated their use of Item Response Theory (IRT) and clarified its application to LLM evaluation. In response to suggestions, they conducted simulation studies to explore scenarios involving multiple latent abilities. Overall, the authors provided strong evidence for the reliability and utility of metabench, resulting in an improved consensus towards acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper considers the six LLM benchmarks included in the Open LLM Leaderboard (ARC, GSM8K, HellaSwag, MMLU, TruthfulQA, and WinoGrande) and seeks to create a much smaller benchmark that is predictive of the original suite by subselecting items. This is done using data from more than 5000 LLMs included in the leaderboard and a psychometric method called item response theory (IRT) which in essence fits a model that estimates the item's difficulty and how well the item discriminates between models whose \\\"abilities\\\" are close to the item's difficulty. (Note this model ability is also fit by the method in an alternating fashion.)
The presented method results in a benchmark that is only 3% the size of the original benchmark but is able to effectively reconstruct both the original individual benchmark scores and the joint score. Finally, using factor analysis, the authors demonstrate that a single latent is predictive of all 6 benchmarks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and clearly communicates its ideas and methods. In the choice of the IRT, multiple models and methods for estimating the ability are explored. The method proposed produces a much smaller benchmark which the authors demonstrate has better predictive power than randomly subsampling items (Figure 1B). Careful consideration is given to potential limitations of the method, including assumptions about the conditional independence of the LLMs used for the study. The work also considers the interesting idea of a benchmark that performs adaptive testing in which items are selected sequentially based on a current estimate of the model's ability.\\n\\nOverall I think the paper makes meaningful contributions to studying LLM benchmarks and making model evaluation more efficient, and I thus lean towards acceptance. However, I do think the benchmarks considered are missing some of the abilities that people seek to measure in LLMs (e.g. coding), somewhat limiting the work's impact. I seek to provide concrete suggestions regarding this in the next section.\", \"weaknesses\": \"My comments in this section are not intended to be required changes to the paper but rather a discussion of what I think the authors could add to have more significant impact.\\n\\nCurrently the main output of the paper is a much smaller benchmark that can be used to efficiently rank models on the six benchmarks as well as evidence from factor analysis that all six benchmarks are measuring a single latent ability.
However, across the broader field of LLM benchmarks, it is generally assumed that there are multiple latent dimensions to the abilities of LLMs. For example, if a code benchmark was added into the set, I would assume this would require another latent dimension to fit model performance, and it would be intriguing if this was not true! Also I would be curious if a larger fraction of the test items is required to reconstruct the scores when the set of included benchmarks require multiple latent ability dimensions to represent.\\n\\nIn essence, the most interesting direction I see for this work is to apply the methods to a more comprehensive set of benchmarks to try to discover latent ability dimensions that might be interpretable as what we think of as LLM capabilities. This should then also provide a characterization of which of these abilities each benchmark measures.\", \"questions\": \"For quite a few models on the leaderboard, the MMLU score will be random chance (~25%, which you can see in Figure 1). Would it be a useful preprocessing step to subtract out random chance from the score and renormalize? E.g. take (score - 0.25) / (1 - 0.25).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your constructive review, your fruitful suggestions, and for raising the score to an 8. We deeply appreciate it!\"}", "{\"comment\": \"**References**\\n\\n[1] Ramachandran, R., Kulkarni, T., Sharma, C., Vijaykeerthy, D., & Balasubramanian, V. N. (2024). On Evaluation of Vision Datasets and Models using Human Competency Frameworks. arXiv preprint arXiv:2409.04041.\\n\\n[2] Sorscher, B., Geirhos, R., Shekhar, S., Ganguli, S., & Morcos, A. (2022). Beyond neural scaling laws: beating power law scaling via data pruning. Advances in Neural Information Processing Systems, 35, 19523-19536.\\n\\n[3] Paul Irwing, Tom Booth, and David J. Hughes (eds.). 
The Wiley Handbook of Psychometric Testing: A Multidisciplinary Reference on Survey, Scale and Test Development. Wiley, 1 edition, April\\n2018. ISBN 978-1-118-48983-3 978-1-118-48977-2. doi: 10.1002/9781118489772\\n\\n[4] Fernando Mart\\u00ednez-Plumed, Ricardo BC Prud\\u00eancio, Adolfo Mart\\u00ednez-Us\\u00f3, and Jos\\u00e9 Hern\\u00e1ndez-\\nOrallo. Making sense of item response theory in machine learning. In ECAI 2016, pp. 1140\\u20131148.\\nIOS Press, 2016.\\n\\n[5] Fernando Mart\\u00ednez-Plumed, Ricardo B.C. Prud\\u00eancio, Adolfo Mart\\u00ednez-Us\\u00f3, and Jos\\u00e9 Hern\\u00e1ndez-\\nOrallo. Item response theory in AI: Analysing machine learning classifiers at the instance level.\\nArtificial Intelligence, 271:18\\u201342, June 2019. ISSN 00043702. doi: 10.1016/j.artint.2018.09.004.\\n\\n[6] Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P. Lalor, Robin Jia, and Jordan\\nBoyd-Graber. Evaluation Examples are not Equally Informative: How should that change NLP\\nLeaderboards? In Proceedings of the 59th Annual Meeting of the Association for Computational\\nLinguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4486\\u20134503, Online, 2021. Association for Computational Linguistics. doi:\\n10.18653/v1/2021.acl-long.346.\\n\\n[7] Clara Vania, Phu Mon Htut, William Huang, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang,\\nHaokun Liu, Kyunghyun Cho, and Samuel R. Bowman. Comparing Test Sets with Item Response\\nTheory, June 2021.\\n\\n[8] Xiting Wang, Liming Jiang, Jose Hernandez-Orallo, David Stillwell, Luning Sun, Fang Luo, and\\nXing Xie. Evaluating General-Purpose AI with Psychometrics, December 2023a.\\n\\n[9] Hern\\u00e1ndez-Orallo, J. (2017). Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review, 48, 397-447.\"}", "{\"comment\": \"Thank you for your clarification. 
I will keep my score.\"}", "{\"comment\": \"We are happy about the positive review of our paper and thank the reviewer for motivating us to further investigate whether our mitigation strategies against memorization are valid.\\n\\n> As mentioned in the limitations section, a smaller benchmark has the risk of being memorized.\\n\\nThe reviewer is correct to remark on the problem of memorization and contamination. We have included a discussion of this in a new section (3. Using metabench), complementing the discussion in section B.6 of the appendix. Specifically, we have created four versions of metabench:\\n- Version A: The main set of items from each benchmark as presented in the paper.\\n- Version B: Another similarly sized set of items from each benchmark with 0 overlap with version A (useful when an LLM is evaluated multiple times on version A)\\n- Versions A and B, in which the answer choices are randomly permuted (thus, if an LLM has memorized only the response number, it should fail to choose the new correct option)\\n\\nIn section 3, we present an experiment testing the effect of memorization on performance on each of these sets. The analysis consists of three stages.\\n1. We evaluated three LLMs on each version of metabench.\\n1. We fine-tuned them on version A with fixed multiple choice orders.\\n1. We then evaluated them on all four versions again.\\n\\nWhile performance on the training set significantly improved, as expected, this performance boost was attenuated by permuting the answer choices, and performance did not improve significantly on the disjoint set. 
Therefore, having four versions of metabench can go some way to mitigating the problems of memorization and contamination, although of course we must ultimately trust that researchers would never explicitly train on any component of metabench if they want to use it to make meaningful inferences about performance or ability.\\n\\n\\n\\n> Will a small benchmark lead to a large variance in evaluation?\\n\\nThis is a crucial point and we thank the reviewer for drawing our attention to it. If a regression model is unbiased (its average residual is 0), then its MSE is identical with the prediction error variance. Thus, the RMSE is identical with the standard deviation of the prediction error. In the paper we show how the test RMSE grows _sublinearly_ with the size of cv-subsampled benchmarks (Section 2.2 and Appendix B.2): While the error variance increases with shrinking benchmark sizes, it does so very slowly until a certain point. Because there is so much overlap in the information measured by all items, we lose very little by discarding a large proportion of benchmark items. The important aspect is that the remaining items cover enough space of the ability landscape tested by the benchmark. This is exactly why we use information filtering to construct metabench.\"}", "{\"comment\": \"We thank all reviewers for their constructive and helpful feedback. Their input was immensely valuable to further improve our submission. The reviewers\\u2019 assessment was overall positive with only one reviewer giving a score of 5:\\n- Reviewer iUxe deemed our methods as \\u201cstreamlined and cost-effective\\u201d.\\n- Reviewer Fxth found our analyses \\u201ccomprehensive [and] thorough\\u201d and deemed our results \\u201cimpressive\\u201d.\\n- Reviewer rtaB praised our paper as \\u201cwell-written [and] clear\\\". They further assessed that our paper and benchmark are \\\"a meaningful contribution[...] 
to studying LLM benchmarks and making model evaluation more efficient\\u201d.\\n- Reviewer cph2 added that our paper is \\u201cwell-organized, with a clear explanation of goals and [...] techniques\\u201d. They also acknowledged our contribution as \\u201csubstantial [for the field of] LLM evaluation\\u201d.\\n\\nIn response to the reviewers' feedback, we have made the following major additions:\\n1. We re-ran the entire pipeline for benchmark distillation with 4 additional seeds and checked the stability of our reported results with regard to test error, rank preservation and benchmark sizes.\\n2. We implemented a clustering-based item selection technique whose performance we compared with our information filtering approach, while keeping all other variables constant.\\n3. We fit 2-dimensional IRT models and tested if adding a second latent ability significantly improved score reconstruction.\\n4. We validated our strategies against test memorization by fine-tuning 3 LLMs on the main set of metabench and testing their performance on the disjoint alternative set and on both sets with permuted answer labels.\\n5. We ran a simulation study that reveals under which conditions synergy effects across benchmarks arise for score reconstruction.\\n\\nThese analyses, their results, and answers to further reviewer questions are contained in our responses to the individual reviews below. Corresponding code and figures are contained in the Supplementary Material. We again want to thank the reviewers for their valuable time, attention and for actively taking part in the review process.\\n\\nUpdate: We have incorporated the corresponding changes into our current version of the manuscript and updated the PDF available on OpenReview.\"}", "{\"comment\": \"[Part 1/2]\\n\\nWe thank the reviewer for their thoughtful and encouraging review of our paper. We are glad the reviewer found our paper well-written and carefully thought through. 
The reviewer also inspired further investigation on the dimensionality and broadness of latent abilities.\\n> if a code benchmark was added into the set, I would assume this would require another latent dimension to fit model performance\\n\\nThank you for raising this interesting question; it is very plausible that performance on a coding benchmark does not only depend on some general ability to solve analytical problems, but that it would also depend on knowledge in programming languages, program architecture, data structures etc. In that case, a 2-dimensional IRT-model (2 latent abilities) is likely superior to a one-dimensional IRT model to reconstruct a coding benchmark\\u2019s score. However, it is unclear how much unique structure this would add to the covariance matrix across latent abilities for multiple benchmarks - that is, if that would warrant a second latent ability in FA.\\n\\nIn fact, your question points towards a broader topic. There\\u2019s an important distinction to make between our IRT-based benchmark distillation and the FA results:\\n\\nIRT: For each benchmark, we essentially fit a distinct one-dimensional IRT model aiming to capture the latent ability tested by the corresponding benchmark. One can view it as the test-specific aptitude. This approach yields six distinct latent abilities.\\n\\nFA: We show with FA that these abilities are largely governed by one more abstract ability. However, we do not enforce this dependence structure through our analysis choices: We do not fit a single IRT model on all benchmarks at once to find a one-dimensional latent ability that is captured jointly by all six benchmarks.\\n\\nThis, on the other hand, raises the question of why we fit one-dimensional IRT models. 
We added the following paragraph to the appendix and added a reference to it in Section 2.3:\\n\\n_How many distinct abilities play into solving a benchmark?_ This is a nuanced question and is distinct from _how many ability dimensions are sufficient for the purpose of score reconstruction_. While conceptually it is a hard claim that a single ability governs test performance, there are three reasons for using one-dimensional IRT models in our pipeline:\\n1. _Item selection_: Information filtering scales well in the one-dimensional case. Finding the most-informative items in a multi-dimensional space opens up new problems: Do we marginalize out each dimension? Do we search in the joint space? If yes, what information coverage is desirable, which parts of the n-dimensional grid are most relevant overall?\\n1. _Estimation variance_: The total number of loading parameters doubles with each added latent ability, which increases the uncertainty in the IRT fit, on which we base the remaining selection process.\\n1. _Diminishing returns_: Table 5 shows that for score reconstruction, the performance boost from using two latent abilities is negligible (when present at all).\\n\\nTable 5: _Two-dimensional latent abilities do not substantially aid score recovery_. For the 350-item version of each benchmark, we fit a 1-dimensional and a 2-dimensional 2PL model and derived MAP estimates of the latent abilities. We then fit a GAM of the original score using either the single latent ability from the 1-dim fit or both latent abilities from the 2-dimensional fit. 
RMSEs are reported on identical test sets per benchmark.\\n\\n| | ARC | GSM8K | HellaSwag | MMLU | TruthfulQA | WinoGrande |\\n|---|---|---|---|---|---|---|\\n| RMSE(1-dim) | 0.893 | 1.293 | 0.844 | 1.048 | 0.988 | 1.055 |\\n| RMSE(2-dim) | 0.893 | 1.253 | 0.820 | 1.065 | 0.989 | 1.003 |\\n| \\u0394 | 0.000 | 0.040 | 0.024 | -0.017 | -0.001 | 0.052 |\"}", "{\"summary\": \"The paper introduces Metabench, a sparse benchmarking method designed to evaluate large language models (LLMs) with minimal redundancy and resource demands. By analyzing data from over 5000 LLMs across six benchmarks (ARC, GSM8K, HellaSwag, MMLU, TruthfulQA, and WinoGrande), Metabench distills these into a much smaller subset, reducing the combined item count by over 97%. Using psychometric techniques such as Item Response Theory (IRT), Metabench selects the most informative items, facilitating efficient and accurate evaluation while maintaining the integrity of the original benchmarks. The sparse benchmark achieves impressive fidelity, reconstructing original scores with less than 1.24% RMSE on average, and identifying a single common latent factor strongly correlating with general model ability.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The paper\\u2019s technical approach is methodologically sound, with robust use of IRT and statistical modeling to identify informative items.\\n2. It is well-organized, with a clear explanation of Metabench\\u2019s goals and psychometric techniques.\\n3. It makes a substantial contribution to LLM evaluation, providing a novel, efficient, and scalable benchmarking solution.\", \"weaknesses\": \"1. The framework currently focuses on six benchmarks; additional work could explore its applicability across a broader range of LLM tasks or domains.\\n2. 
Metabench\\u2019s dependence on psychometric models, especially IRT, could be limiting if these models do not fully capture the complexities of LLM behavior, as they were traditionally designed for human subjects.\", \"questions\": \"1. Could authors elaborate on potential limitations when applying Metabench to other domains?\\n2. How might Metabench handle scenarios where specific benchmarks assess unique skills not captured by a general latent factor?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"[Part 1/2]\\n\\nWe thank the reviewer for their detailed and motivating review of our paper. We are glad the reviewer found our results impressive and our analyses comprehensive and thorough. The reviewer also provided crucial pointers for further analyses to prove the soundness of our methodology. In particular, we tested (1) the validity of our memorization mitigation strategies, (2) the performance of an alternative item selection procedure, and (3) the robustness of our results with regard to random seeds.\\n\\n> Memorization risks: (1) Smaller benchmark size increases memorization vulnerability (2) Proposed mitigation strategies need further validation\\n\\nThe reviewer is correct to remark on the problem of memorization and contamination, and we have now performed some experiments to validate our mitigation strategies. We have included a discussion of this in a new section (Section 3. Using metabench), complementing the discussion in section B.6 of the appendix. Specifically, we have created four versions of metabench: two disjoint sets of distinct items from each benchmark (validation version vs. test version), and versions of these where choices in multiple-choice questions are fixed or are randomly re-labelled (standard choices vs. 
permuted choices).\\n\\nIn section 3, we present an experiment testing the effect of memorization on performance on each of these sets. We evaluated three LLMs on each version of metabench, then fine-tuned them on the validation set of items with fixed multiple choice orders. We then evaluated them on all four versions again. While performance on the finetuning set significantly improved, this performance boost was attenuated by permuting the answer choices, and performance did not improve significantly on the disjoint test set.\\n\\nThis suggests that having four versions of metabench can go some way to mitigating the problems of memorization and contamination, although of course we must ultimately trust that researchers would never explicitly train on any component of metabench if they want to use it to make meaningful inferences about performance or ability.\\n\\n\\n> Theoretical Assumptions: (1) IRT assumptions about LLMs need more justification (2) Independence assumptions between models may be violated due to shared architectures/training data\\n\\nWe refer the reviewer to section B.4 of the appendix for a thorough justification of the use of IRT on artificial agents. In short, item response theory is a statistical method to derive properties of test items based on observed response patterns. It makes no assumptions about the nature of the agents tested using the items [1], and many authors in the ML community have adopted this method already [2-7].\\n\\nIRT does not assume marginally independent responses across agents, but conditionally independent ones (conditioned on the latent ability). In appendix B.8, we show that the reconstruction performance of metabench is unbiased with respect to LLM architecture. Furthermore, one cannot test for overlap in training data across models. 
However, analogously, students often have largely overlapping \\u201ctraining data\\u201d due to pre-specified educational curricula - and still IRT is used successfully on student populations. In summary, this suggests that even if architecture or training data create substantial dependence in response accuracies not captured by a benchmark\\u2019s corresponding latent ability, the performance of metabench does not suffer from it.\"}", "{\"comment\": \"Thanks for the detailed response. I'll keep my score.\"}", "{\"comment\": \"We thank the reviewer for their rewarding and thorough review of our paper. We are happy the reviewer found our methods sound, our paper well-organized, and acknowledged our contribution as substantial. The reviewer also inspired an additional simulation experiment to test under which circumstances our score reconstruction approach can handle latent abilities that are not well covered by a general latent factor.\\n> The framework currently focuses on six benchmarks; additional work could explore its applicability across a broader range of LLM tasks or domains. (...) Could authors elaborate on potential limitations when applying Metabench to other domains?\\n\\nWe hope that the methods and ideas used to create metabench will act as stepping stones for future research on resource-efficient AI benchmarking. There are three important conditions under which applying our methods to a different set of benchmarks promises to be successful:\\n1. IRT-based benchmark distillation requires the availability of large datasets containing single-item accuracies for benchmarks run by thousands of LLMs.\\n1. A reducible benchmark needs to be large and specific enough, such that there is enough information overlap between items to create exploitable redundancy.\\n1. 
For synergy effects, a substantial number of LLMs needs to be evaluated on multiple benchmarks, such that the correlation structure among benchmark scores can be used for joint score recovery.\\n\\nHowever, since IRT is only based on the accuracy of multiple choice questions, in principle any classification benchmark with quantifiable accuracy can be reduced under the conditions named above. For instance, as an example for large computer-vision datasets, [1] apply IRT on the ImageNet validation set. While they show that IRT methods can be used to distil the validation set into informative subsets, the potential to reduce the entire benchmark remains unexplored to the best of our knowledge (see also [2] for an overview on pruning methods for ImageNet). \\n\\n\\n> Metabench\\u2019s dependence on psychometric models, especially IRT, could be limiting if these models do not fully capture the complexities of LLM behavior, as they were traditionally designed for human subjects.\\n\\nWe refer the reviewer to section B.4 of the appendix for a thorough justification of the use of IRT on artificial agents. In short, item response theory is a statistical method to derive properties of test items based on observed response patterns. In that sense, IRT models only aim to provide a summary of test behavior rather than capture some complex qualitative patterns. Furthermore, IRT makes no assumptions about the nature of the agents tested using the items [3], and many authors in the ML community have adopted this method already as a more nuanced way to analyze benchmark performance [4-9].\\n\\n\\n> How might Metabench handle scenarios where specific benchmarks assess unique skills not captured by a general latent factor?\\n\\nThis is a great question and we conducted a proof of concept simulation study to address it: We repeatedly simulated two latent abilities for 500 subjects and two benchmarks of 100 items. 
The first benchmark measures both abilities to a varying degree, and the second test only measures the second ability. We separately varied the correlation between both latent abilities and the extent to which the first benchmark depends on both latent abilities (more details are contained under Rebuttals Figure 2 in the Supplementary Material). If two latent abilities are weakly correlated, they cannot be well described by a general latent factor. Our simulation shows that in this case, score reconstruction for the first benchmark benefits from including the latent ability estimated from the second benchmark, especially if the first benchmark moderately measures the second ability. This suggests that using multiple latent abilities (each derived from a different benchmark) to reconstruct the score of a single benchmark is the way to go in this scenario. Thus, metabench can indeed handle scenarios where specific benchmarks assess unique skills not captured by a general latent factor.\"}", "{\"summary\": \"This paper proposes metabench, a compressed version of six popular LLM benchmarks (ARC, GSM8K, HellaSwag, MMLU, TruthfulQA, and WinoGrande) that achieves comparable evaluation capability while using less than 3% of the original items. The authors leverage psychometric techniques, particularly Item Response Theory (IRT), to identify the most informative test items and estimate latent abilities that can reconstruct original benchmark scores with high accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Novel application of psychometric methods to LLM evaluation\\n2. Impressive compression ratio (<3% of original size) while maintaining accuracy. Low reconstruction error (1.24% RMSE for individual benchmarks, 0.58% for total score).\\n3. Comprehensive ablation studies and baseline comparisons. Thorough investigation of factor structure across benchmarks.\", \"weaknesses\": \"1. 
Memorization risks:\\n (1) Smaller benchmark size increases memorization vulnerability\\n (2) Proposed mitigation strategies need further validation\\n2. Theoretical Assumptions:\\n (1) IRT assumptions about LLMs need more justification\\n (2) Independence assumptions between models may be violated due to shared architectures/training data\", \"questions\": \"1. Could alternative item selection methods (beyond Fisher information) yield better results?\\n2. How stable are the results across different random seeds and model subsets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces metabench, a sparse benchmark distilled from six prominent benchmarks (ARC, GSM8K, HellaSwag, MMLU, TruthfulQA, and WinoGrande). Simple criteria, cross-validated subsampling, and information-based filtering are used to reduce the size of the benchmark. Original scores are reconstructed in a cross-validated manner.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper distills six prominent LLM benchmarks into a much smaller one with less than 3% of the size, which enables more streamlined and cost-effective evaluation methods;\\n2. The new sparse benchmark yields estimators able to reconstruct the original benchmark score.\", \"weaknesses\": \"As mentioned in the limitations section, a smaller benchmark has the risk of being memorized.\", \"questions\": \"Will a small benchmark lead to a large variance in evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"[Part 2/2]\\n\\n> Could alternative item selection methods (beyond Fisher information) yield better results? \\n\\nThank you for suggesting this additional experiment consolidating our methodological choices. 
We will add the following content to the appendix. The corresponding code is contained in \\u201cclustering.R\\u201d and \\u201cevaluate.clust.R\\u201d in the Supplementary Material.\\n\\nAn alternative item selection strategy presented in previous literature (e.g. [7]) is based on clustering. We altered only the item selection method and kept the remaining analysis steps constant for maximum comparability with our previous results: For this, we took the IRT-based item parameters from the 350-item fits for each benchmark and performed k-means clustering on them. Iteratively, one item was drawn from each remaining cluster, until the total number of items equaled the number of items in our Fisher-information-based subsets. Then we re-fitted the same IRT model on that subset of items, estimated the latent ability per LLM and fitted a GAM of the original score based on the latent ability. We performed a grid search over the hyperparameters k in [10, 15, 20, 25, 30], IRT model type m in [2PL, 3PL, 4PL] and latent ability estimation method tau in [MAP, EAPsum] on a validation set. For comparability with our final selection, we then took the item selection with the best validation RMSE for each benchmark and fit a joint GAM (cf. appendix B.5) on the same test set as used for our main results. These are the test RMSEs:\\n\\n| _Benchmark_ | ARC | GSM8K | HellaSwag | MMLU | TruthfulQA | WinoGrande | Total Score |\\n|---------|---------|---------|-----------|--------|------------|------------|-------------|\\n| _Test RMSE_| 1.334 | 1.748 | 1.697 | 1.805 | 1.293 | 2.494 | 0.775 |\\n\\nEach test RMSE is substantially worse than its counterpart with item selection based on Fisher-information (see Figure 1 in the paper). 
Note that since we kept the remaining variables in the processing pipeline constant, this highlights the merit of using Fisher-information functions for item selection.\\n\\n> How stable are the results across different random seeds and model subsets?\\n\\nWe acknowledge our oversight in not conducting this analysis sooner and thank the reviewer for the pointer. The following results will get a separate section in the appendix. We ran the entire benchmark distillation procedure using 5 different random seeds, which affect the dataset partitioning into training, validation and test sets, as well as cross-validated subsampling. Please find the results in rebuttals Figure 1 in the Supplementary Material. Test set RMSEs and MAEs show little variation, but subsets for ARC and MMLU are slightly less stable than for the other benchmarks or the total score over benchmarks. Rank correlations are always over r = 0.95, and only MMLU and WinoGrande do not border on r = 1.0. Apart from two outliers of ~50 items, the converged benchmark sizes are largely stable as well. Overall, the results seem largely independent of the chosen random seed.\\n\\n---\\n\\n__References__\\n\\n[1] Paul Irwing, Tom Booth, and David J. Hughes (eds.). The Wiley Handbook of Psychometric Testing: A Multidisciplinary Reference on Survey, Scale and Test Development. Wiley, 1 edition, April\\n2018. ISBN 978-1-118-48983-3 978-1-118-48977-2. doi: 10.1002/9781118489772\\n\\n[2] Fernando Mart\\u00ednez-Plumed, Ricardo BC Prud\\u00eancio, Adolfo Mart\\u00ednez-Us\\u00f3, and Jos\\u00e9 Hern\\u00e1ndez-\\nOrallo. Making sense of item response theory in machine learning. In ECAI 2016, pp. 1140\\u20131148.\\nIOS Press, 2016.\\n\\n[3] Fernando Mart\\u00ednez-Plumed, Ricardo B.C. Prud\\u00eancio, Adolfo Mart\\u00ednez-Us\\u00f3, and Jos\\u00e9 Hern\\u00e1ndez-\\nOrallo. Item response theory in AI: Analysing machine learning classifiers at the instance level.\\nArtificial Intelligence, 271:18\\u201342, June 2019. 
ISSN 00043702. doi: 10.1016/j.artint.2018.09.004.\\n\\n[4] Clara Vania, Phu Mon Htut, William Huang, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang,\\nHaokun Liu, Kyunghyun Cho, and Samuel R. Bowman. Comparing Test Sets with Item Response\\nTheory, June 2021.\\n\\n[5] Xiting Wang, Liming Jiang, Jose Hernandez-Orallo, David Stillwell, Luning Sun, Fang Luo, and\\nXing Xie. Evaluating General-Purpose AI with Psychometrics, December 2023a.\\n\\n[6] Hern\\u00e1ndez-Orallo, J. (2017). Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review, 48, 397-447.\\n\\n[7] Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, and Mikhail Yurochkin. tinyBenchmarks: Evaluating LLMs with fewer examples, February 2024.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I thank the authors for their response and the substantial revisions to the paper. I especially appreciate two of the major additions which (1) test whether any improvement comes from adding a second latent ability for the set of 6 benchmarks considered in the paper and (2) explore through simulation the effects of having two latent abilities and two benchmarks with varying relations to the latent abilities. While I remain interested in seeing an expanded family of benchmarks for which a second latent does provide an improvement, I view this as a substantial expansion to the work and thus beyond the scope of the paper. Based on the improvements made by the authors, I have raised my score.\"}", "{\"comment\": \"[Part 2/2]\\n\\n> Would it be a useful preprocessing step to subtract out random chance from the score and renormalize? E.g. take (score - 0.25) / (1 - 0.25).\\n\\nThank you for this suggestion. We have given this careful thought and come to the following conclusions: It is better not to normalize for guess-rate to keep normalized scores comparable across studies. 
In a few LLMs this would in fact lead to negative normalized scores, especially in GSM8K. \\nAs score normalization does not alter the single responses, IRT fits would remain unaffected. Finally, since this normalization procedure is a fixed affine transform, it can be perfectly adapted to by the weights of a regression model (like the GAMs we use). It would therefore not affect score reconstruction performance either.\\n\\n> Also I would be curious if a larger fraction of the test items is required to reconstruct the scores when the set of included benchmarks require multiple latent ability dimensions to represent.\", \"this_is_a_nuanced_question\": \"In order to answer this, we conducted a proof of concept simulation study, in which we repeatedly simulated two latent abilities for 500 subjects and two benchmarks of 100 items. The first benchmark measures both abilities to a varying degree, and the second test only measures the second ability. We separately varied the correlation between both latent abilities and the extent to which the first benchmark depends on both latent abilities (more details are contained under Rebuttals Figure 2 in the Supplementary Material). In summary, if a test requires multiple latent abilities and they are not strongly correlated, score recovery benefits from adding multiple latent abilities as regression predictors. Therefore (assuming independent loading parameters over dimensions) one needs enough items to estimate each of the required latent abilities, overall raising the number of required items. So your hunch is probably correct!\"}" ] }
4Sv5MQ931E
MSR-ViR: Modularized Self-reflected Video Reasoner for Video Question Answering
[ "Zihan Song", "Zi Qian", "Xin Wang", "Hong Chen", "Yaofei Wu", "Longtao Huang", "Hui Xue'", "Wenwu Zhu" ]
Recently, multimodal large language models (multimodal LLMs) have been applied to a wide range of video understanding tasks, particularly for Video Question Answering (VideoQA). However, existing multimodal LLMs suffer from the following challenge: the classic end-to-end training strategies of multimodal LLMs for VideoQA tasks are black-box, thus lacking interpretability as they can neither present a reasoning path nor indicate where in the video the answer is derived from. To tackle this challenge, we propose MSR-ViR (Modularized Self-Reflected Video Reasoner), a self-reflected framework that introduces a Modularized Spatial-Temporal Grounding (MoST-Grounding) module to multimodal LLMs for VideoQA tasks. MoST-Grounding utilizes a question parser LLM to generate execution policies, which serve as a reasoning path from questions to answers, providing interpretability for our VideoQA framework. Based on the execution policies, MoST-Grounding invokes various small modules to localize temporal segments and spatial regions in videos, which provide multimodal LLMs with the most relevant visual information, while presenting visual evidence for our final answers. To avoid the question parser LLM generating unreasonable policies, we further propose a reinforcement learning-based Alternate Self-reflection training strategy to optimize the Multimodal LLM and the question parser LLM. Experiments on VideoQA datasets (NExT-QA and STAR) and the grounded VideoQA dataset (NExT-GQA) demonstrate that our method significantly improves the video understanding capabilities of multimodal LLMs, while providing interpretable reasoning paths together with temporal and spatial localization evidence within the video.
[ "Video Question Answering", "Multimodal LLM", "Modular Network", "Self-reflected Training" ]
Reject
https://openreview.net/pdf?id=4Sv5MQ931E
https://openreview.net/forum?id=4Sv5MQ931E
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xhmvYoUiBG", "tyCL1QveWn", "sZSjiM3FZZ", "naHFMXD9Vs", "mq5vRdgu3H", "hC6nRM6Hib", "as8H5LHoYT", "VhEyJqlHKj", "UlyRO36SuS", "SyPBhNnRLz", "SbPnfXISJM", "RGlI4Hq2ee", "NkYQcqvKiw", "L4te29y1xh", "Jo4QX0c98j", "GUFOkjSeQm", "FLXsVzgKaX", "DgSORJb6Fq", "C9b1sKf9u3", "Ba0QgVokLh", "Agmgkvpwxa", "7j0gowGbfD", "4EUIDleEg4", "0movVuVI0x" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737524239979, 1732865363404, 1733033734772, 1732712337394, 1732711325744, 1732454761385, 1732868023569, 1732454359388, 1732609404386, 1732711287557, 1733113186403, 1733033647994, 1730171354764, 1732453652010, 1732713301007, 1732711347446, 1732454498726, 1730476065507, 1734844676052, 1729416768401, 1733131417573, 1732711369969, 1732032927706, 1732453990735 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_1xgn" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_P6dF" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_ndrP" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_P6dF" ], [ 
"ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_ndrP" ], [ "ICLR.cc/2025/Conference/Submission13177/Area_Chair_N8Yd" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_1xgn" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ], [ "ICLR.cc/2025/Conference/Submission13177/Reviewer_gPWW" ], [ "ICLR.cc/2025/Conference/Submission13177/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for your efforts in addressing my concerns, and I hope the authors will add these experiments and analyses to the final version to make it more comprehensive for readers. In conclusion, I will raise my score to 6.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and the insightful feedback. We hope that our response and the revised paper have addressed your concerns. As the discussion stage is about to end, we would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"comment\": \"Thank you for the responses. My concerns are addressed and I increase the score to 6.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and we hope that our response and the uploaded revised paper have addressed your concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"title\": \"Responses to Reviewer gPWW\", \"comment\": \"We sincerely thank the reviewer for taking time to review our paper and providing thoughtful feedback and insightful suggestions. 
We address the weaknesses and questions as follows:\n\n**Weakness1**\n\nDue to the lack of datasets with policy annotations, we are unable to directly perform supervised training for the question parser. As such, using feedback from the Multimodal LLM as rewards to train the question parser via reinforcement learning is a relatively straightforward approach. Based on the results of the ablation study (Table 3) and specific examples (Figures 4, 9, and 10), it can be observed that reinforcement learning enables the question parser to generate more reasonable policies that better support locating spatio-temporal segments relevant to the question in the video, demonstrating the effectiveness of our RL-based Alternate Self-reflection training strategy. We will experiment on larger VideoQA datasets with more complex scenarios in future work to evaluate the scalability of our RL-based training strategy.\n\n**Weakness2**\n\nWe are sorry that, due to time constraints, we are unable to provide experimental results on the SOK-Bench and Complex-TV-QA datasets at this stage. Most questions in the Complex-TV-QA datasets focus on predicting future events based on the scenarios in the video, while the SOK-Bench includes many counterfactual reasoning questions. For these types of questions, grounding may not help the Multimodal LLM answer the question. As a result, they may not be suitable for evaluating our grounding-based framework MSR-ViR\u2019s performance. It is worth noting that statistical analysis[1] shows that the average question length in the NExT-QA and STAR datasets is among the longest in commonly used VideoQA datasets, demonstrating the ability of our framework to handle long and complex questions effectively.\n\n**Weakness3**\n\nWe test additional grounding-based models (SeViLa and GCG) and compare the results with our Llava-Next-based MSR-ViR framework, as shown in the table below. 
As seen, the MSR-ViR framework surpasses SeViLa and GCG on the NExT-QA dataset and SeViLa on the STAR-sub dataset. To further demonstrate the effectiveness of our framework, we test the base Multimodal LLM Llava-Next on NExT-QA and STAR-sub. As seen in the table below, the VideoQA accuracy has been significantly improved by utilizing the MSR-ViR framework (from 73.1 to 74.9 on NExT-QA, from 69.9 to 71.0 on STAR-sub). What's more, for Acc@GQA and grounding metrics, MSR-ViR surpasses SeViLa on the NExT-GQA dataset, showing stronger grounded-QA capability.\n\n| | NExT-QA Tem. | NExT-QA Cau. | NExT-QA Des. | NExT-QA Avg. | STAR-sub Int. | STAR-sub Seq. | STAR-sub Avg. |\n| ------------------------------- | ------------ | ------------ | ------------ | ------------ | --------------- | ------------- | ------------- |\n| SeViLa | 69.4 | 74.2 | **81.3** | 73.8 | 63.7 | 70.4 | 67.1 |\n| GCG | **72.6** | 74.2 | 80.7 | 74.6 | - | - | - |\n| Llava-Next | 69.5 | 73.3 | 79.7 | 73.1 | 67.6 | 72.1 | 69.9 |\n| MSR-ViR(ours, Llava-Next-based) | 72.2 | **74.6** | 80.9 | **74.9** | **68.9** | **73.1** | **71.0** |\n\n| | Acc@GQA | mIoP | IoP@0.3 | IoP@0.5 | mIoU | IoU@0.3 | IoU@0.5 |\n| ------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\n| SeViLa | 16.6 | 29.5 | 34.7 | 22.9 | 21.7 | 29.2 | 13.8 |\n| MSR-ViR(ours, Llava-Next-based) | **18.6** | **29.6** | **39.0** | **24.1** | **23.4** | **33.6** | **16.4** |\n\n**Weakness4**\n\nOur MSR-ViR framework is implemented based on the Swift[2] framework, and all experimental results are fully reproducible. We plan to open-source our code after the review and provide detailed documentation to facilitate the reproduction and application of our framework.\n\n[1] Fu, Chaoyou, et al. 
\\\"Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis.\\\" arXiv preprint arXiv:2405.21075 (2024).\\n\\n[2] Zhao, Yuze, et al. \\\"Swift: a scalable lightweight infrastructure for fine-tuning.\\\" *arXiv preprint arXiv:2408.05517* (2024).\"}", "{\"comment\": \"We sincerely thank the reviewer for the insightful feedback and suggestions. The analyses and experiments according to the reviewer's suggestions further improve the quality of our paper, which have been added to the revised version of the paper.\"}", "{\"title\": \"Responses to Reviewer 1xgn (Part I)\", \"comment\": \"We sincerely thank the reviewer for taking time to review our paper and providing thoughtful feedback and insightful suggestions. We address the weaknesses as follows:\\n\\n**Weakness1**\\n\\nThe MoST-Grounding module introduces a temporal grounding tool, UniVTG, and a spatial grounding tool, YOLO-World. These two models are relatively small compared to our base LLM, with UniVTG having 41.3M parameters and YOLO-World 48M parameters. Additionally, both models have fast inference speeds, so the MoST-Grounding module does not introduce significant computational costs. In fact, the primary source of computational overhead in our framework is the question parser based on Qwen2-7B, which is quite common in VideoQA tasks based on LLMs. Many reasoning-based VideoQA models(like MoReVQA[1], LLoVi[2] and VideoTree[3]) rely on LLMs, which inevitably introduce additional computational overhead.\\n\\nWhen selecting the small modules for the MoST-Grounding module, we consider not only the models\\u2019 ability to perform grounding tasks but also their operational efficiency, as highlighted by the reviewer. If the small modules were slow, they would substantially affect the framework's overall efficiency. 
The chosen UniVTG and YOLO-World models strike a balance between strong grounding capabilities and high efficiency.\n\nFollowing the reviewer\u2019s suggestion, we replace UniVTG with two temporal grounding models, $R^2$-Tuning[4] and Moment-DETR[5], on the NExT-GQA dataset as a further ablation study. The results are presented in the table below. As shown, both models exhibit lower temporal grounding accuracy compared to UniVTG (especially in the IoP metric), leading to reduced VideoQA accuracy and Acc@GQA for the MSR-ViR framework.\n\n| | Acc@QA | Acc@GQA | mIoP | IoP@0.5 | mIoU | IoU@0.5 |\n| ------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |\n| MSR-ViR (w/ UniVTG) | **69.9** | **18.5** | **30.0** | **25.0** | **22.8** | **16.4** |\n| MSR-ViR (w/ $R^2$-Tuning) | 67.3 | 16.6 | 28.7 | 23.2 | 22.7 | 15.9 |\n| MSR-ViR (w/ Moment-DETR) | 67.4 | 17.2 | 28.6 | 24.1 | 21.4 | 14.7 |\n\nSince spatial grounding must be performed for every temporal-grounded frame, the efficiency requirements for the spatial grounding model are even higher. YOLO-World\u2019s exceptional efficiency makes it particularly well-suited to our needs, offering both high efficiency and high grounding accuracy\u2014features that other open-vocabulary object detector models lack[6].\n\n[1] Min, Juhong, et al. \"MoReVQA: Exploring Modular Reasoning Models for Video Question Answering.\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\n\n[2] Zhang, Ce, et al. \"A simple llm framework for long-range video question-answering.\" *arXiv preprint arXiv:2312.17235* (2023).\n\n[3] Wang, Ziyang, et al. \"VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos.\" *arXiv preprint arXiv:2405.19209* (2024).\n\n[4] Liu, Ye, et al. 
\\\"$ R^ 2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding.\\\" *arXiv preprint arXiv:2404.00801* (2024).\\n\\n[5] Lei, Jie, Tamara L. Berg, and Mohit Bansal. \\\"Detecting moments and highlights in videos via natural language queries.\\\" *Advances in Neural Information Processing Systems* 34 (2021): 11846-11858.\\n\\n[6] Cheng, Tianheng, et al. \\\"Yolo-world: Real-time open-vocabulary object detection.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\"}", "{\"title\": \"Explanation of the Paper Revisions\", \"comment\": \"Thank all the reviewers for taking the time to review our paper and providing valuable and constructive feedback. Based on the reviewers' suggestions, we have made several modifications and additions to the paper. The main revisions are as follows:\\n\\n1. **Table 1**: We have added SeViLa and GCG as grounding-based baselines, along with the experimental results of our Llava-Next version of MSR-ViR.\\n2. **Table 2**: Results of SeViLa and our Llava-Next version of MSR-ViR have been added.\\n3. Based on the revision of Table 1 and Table 2, adjustments have been made to the experiments setup descriptions in Section 4.1, as well as the descriptions of the experimental results in Sections 4.2 and 4.3.\\n4. **Appendix**: A new Section A.6 has been added, which includes experimental results on model parameters and inference speed, as well as an ablation study on the selection of temporal grounding models.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and we hope that our response and the uploaded revised paper have addressed your concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"title\": \"Response to the Authors\", \"comment\": \"Thank you for the reviewer\\u2019s response to my questions. 
The issues I was particularly concerned with, namely the novelty and inference time, have been addressed in detail:\", \"on_the_use_of_multiple_existing_models_in_msr_vir_for_ensembling\": \"The author cites MoReVQA as an example. Although I am generally not fond of model ensembling, it is indeed a common practice in current submissions, which has convinced me on this point.\", \"on_the_inference_time_overhead_of_msr_vir\": \"The table provided by the author shows that their inference speed is 6 times slower than QWen-VL. I believe that using self-reflection might further slow it down. While the author compares the inference speed with models like VideoTree and LLoVi, both of which are designed for long videos and thus present more challenging tasks, these models are not directly comparable (even though the author evaluates their model on NextQA, the lack of evaluation on longer datasets such as EgoSchema makes it unsuitable as a benchmark for long video LLMs). Furthermore, the author incurs a 6x time complexity increase but only shows a marginal improvement over QWen-VL in Table 1, making this inference time overhead intolerable.\\n\\nTherefore, I have decided to lower the score.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and the insightful feedback. We hope that our response and the revised paper have addressed your concerns. As the discussion stage is about to end, we would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"summary\": \"This paper aims to improve the interpretability of Multimodal LLMs in performing VideoQA. To achieve the goal, the authors design a modular self-reflection framework MSR-ViR. The framework primarily comprises a spatial-temporal grounding module and a self-refection learning mechanism based on DPO. 
MSR-ViR basically decouples video grounding from VideoQA, enabling the interpretation of intermediate results to understand the answers. The experiments on related datasets have demonstrated the strength of the approach in both accuracy and interpretability (grounded accuracy).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\tSuccessfully develop a ground-then-answer framework for interpretable video question parsing. The question parser policy is able to be optimized via answer feedback.\n\n2.\tThe approach is well presented and easy to understand.\", \"weaknesses\": \"1.\tThe paper does not improve video grounding but just uses existing method UniVTG. According to table 2, the grounding performance in terms of [email protected] is worse than previous VLMs (VGT and TempCLIP). This severely limits the improvements of QA performance.\n2.\tAccording to the model ablation results in Table 3, the global representation g_v (which opens a back door for grounded QA) seems more crucial than other components. Such results slightly depart from the major claim of interpretable VQA where correct answers are anchored on correct visual content.\n3.\tShould compare with SeViLA which also finetunes a localizer on QVHighlight (like UniVTG) for grounded VideoQA.\", \"questions\": \"see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer ndrP\", \"comment\": \"We sincerely thank the reviewer for taking time to review our paper and providing thoughtful feedback and insightful suggestions. We address the weaknesses and questions as follows:\n\n**Weakness 1**\n\nOur MSR-ViR framework employs Qwen2-7B as the question parser and Qwen-VL as the multimodal answerer, both of which are based on LLM architectures. However, many recent works utilize LLMs to tackle VideoQA tasks. 
For instance, MoReVQA[1] leverages Palm-2, which is also a large language model, four times in order to reason and get the answer to one question. Reasoning tasks often require step-by-step reasoning to derive multiple intermediate results before arriving at the final answer, which distinguishes them from end-to-end models that directly predict the final answer. This difference also explains why reasoning-based approaches typically incur greater computational overhead.\n\n**Weakness 2**\n\nTo enhance the interpretability of the VideoQA task, the MoST-Grounding module employs UniVTG as the temporal grounding model and YOLO-World as the spatial grounding model, which indeed introduces additional parameters. However, both grounding models have relatively small parameter sizes: UniVTG has 41.3M parameters, and YOLO-World has 48M parameters, which are only a tiny fraction of the 7B parameters in LLMs. Therefore, the MoST-Grounding module does not significantly increase the overall parameter count of the framework.\n\nThe MoST-Grounding (Modularized Spatial-Temporal Grounding) module serves to localize the original question temporally and spatially within the video, providing the Multimodal LLM with the most relevant information to the question. We sincerely appreciate the reviewer\u2019s valuable suggestion, as it is true that the current modular network employs only one temporal grounding model and one spatial grounding model. While experiments have already demonstrated satisfactory localization performance (see Table 2), incorporating more modules into the modular network to enhance its functionality could potentially further improve localization and answer accuracy. 
We will try to introduce more modules into our framework in future work.\\n\\n**Weakness 3**\\n\\nBased on the reviewer\\u2019s suggestion, we provide experimental results on inference speed and parameter size as follows:\\n\\n| | Parameter Size | Inference Speed |\\n| ------------- | ---------------------------------------- | ----------------- |\\n| Qwen-VL | 9.6B | 1.29 qa pairs / s |\\n| MSR-ViR(ours) | qwen2(7B) + qwen-vl(9.6B) + yolo-world(48M) + univtg(41.3M) = 16.6B | 0.21 qa pairs / s |\\n\\nThe inference speed results were tested on two NVIDIA A100 GPUs. As shown, MSR-ViR introduces a larger parameter size compared to end-to-end Multimodal LLM. Additionally, since MSR-ViR requires an LLM to first generate a policy, followed by the MoST-Grounding modular network for grounding, and finally the multimodal answerer to provide the answer, its inference speed is noticeably slower than that of end-to-end Multimodal LLM. \\n\\nActually, the additional time cost is normal for reasoning tasks. As tested and reported in [2], the inference speed of VideoTree is about 0.13 qa pairs / s, while the inference speed of LLoVi[3] is about 0.26 qa pairs / s. Similarly, GPT-O1 takes significantly longer time to answer questions than GPT-4o because it performs detailed reasoning process while generating responses.\\n\\n**Question 1**\\n\\nWe sincerely thank the reviewer for providing this insightful suggestion! In our current work, we leverage feedback from the Multimodal LLM as a reward to guide the training of the question parser through reinforcement learning. As shown in the ablation study in Table 3, this reinforcement learning approach improves the ability of the question parser to generate reasonable policies. However, reinforcement learning training may not be as stable or effective as direct supervised learning. 
If a Chain-of-Thought dataset could be constructed to enable supervised training of the question parser, it might further enhance its policy generation capability, thereby improving the overall performance of our framework. We sincerely apologize that, due to time constraints, we are currently unable to construct a high-quality CoT dataset and conduct training. We will explore this direction in our future work.\\n\\n[1] Min, Juhong, et al. \\\"MoReVQA: Exploring Modular Reasoning Models for Video Question Answering.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[2] Wang, Ziyang, et al. \\\"VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos.\\\" *arXiv preprint arXiv:2405.19209* (2024).\\n\\n[3] Zhang, Ce, et al. \\\"A simple llm framework for long-range video question-answering.\\\" *arXiv preprint arXiv:2312.17235* (2023).\"}", "{\"comment\": \"We sincerely thank the reviewer for the response and feedback. The insightful and constructive suggestions further improve the quality of our paper.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and we hope that our response and the uploaded revised paper have addressed your concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"title\": \"Responses to Reviewer 1xgn (Part II)\", \"comment\": \"**Weakness2**\\n\\nIn our work, the interpretability we emphasize refers to clearly demonstrating how, step by step, the relevant temporal segments are located from the video based on a given question, and how the spatial regions relevant to the question are then inferred from the frames extracted from these temporal segments. The interpretability mentioned by the reviewer refers to the process of step-by-step reasoning from visual information and the question to the final answer. 
These represent two different definitions of interpretability and can also be seen as two essential steps in completing the VideoQA task: first, locating the temporal and spatial segments, and then reasoning out the answer based on the localized content. Our work focuses on the former. In future work, we will incorporate the internal reasoning process of the LLM.\n\n**Weakness3**\n\nBased on the reviewer\u2019s suggestion, we introduced SeViLa\u2019s test results on NExT-QA, STAR, and NExT-GQA, as well as GCG\u2019s test results on NExT-QA for comparison with the Llava-Next version of our MSR-ViR framework. The results, presented in the table below, demonstrate a significant improvement in VideoQA accuracy, surpassing existing grounding-based methods such as SeViLa and GCG.\n\n| | NExT-QA Tem. | NExT-QA Cau. | NExT-QA Des. | NExT-QA Avg. | STAR-sub Int. | STAR-sub Seq. | STAR-sub Avg. |\n| ------------------------------- | ------------ | ------------ | ------------ | ------------ | --------------- | ------------- | ------------- |\n| SeViLa | 69.4 | 74.2 | **81.3** | 73.8 | 63.7 | 70.4 | 67.1 |\n| GCG | **72.6** | 74.2 | 80.7 | 74.6 | - | - | - |\n| MSR-ViR(ours, Llava-Next-based) | 72.2 | **74.6** | 80.9 | **74.9** | **68.9** | **73.1** | **71.0** |\n\n| | Acc@GQA | mIoP | IoP@0.3 | IoP@0.5 | mIoU | IoU@0.3 | IoU@0.5 |\n| ------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\n| SeViLa | 16.6 | 29.5 | 34.7 | 22.9 | 21.7 | 29.2 | 13.8 |\n| MSR-ViR(ours, Llava-Next-based) | **18.6** | **29.6** | **39.0** | **24.1** | **23.4** | **33.6** | **16.4** |\"}", "{\"summary\": \"The paper addresses the use of multimodal Large Language Models in understanding tasks across various multimodal scenarios, specifically focusing on applications in Video Question Answering. 
However, current Multimodal Large Language Models are largely black-box systems for VideoQA tasks, lacking the ability to provide an understandable reasoning path and thus suffering from limited interpretability. To address this, the authors propose MSR-ViR, which constructs a modular reasoning structure designed to generate reasoning paths, and incorporates a reinforcement learning framework to prevent the model from generating unreasonable reasoning paths.\n\nWhile the proposed approach is interesting, it relies on the integration of four existing models. This ensemble-based structure shows only marginal performance improvements (1-2%), and the manuscript does not discuss crucial aspects such as reasoning time costs or memory overhead.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The MoST-Grounding module introduces two standard modules\u2014temporal localizer and spatial localizer\u2014which can be flexibly assembled in sequence based on the question parser. This structure is robust and allows for reliable generation of reasoning paths.\n\n2. The authors present a clear motivation: to create a framework for generating reasoning paths for the black-box nature of VideoQA tasks. The comprehensive visualization of reasoning paths further demonstrates the effectiveness of the model.\", \"weaknesses\": \"1. The question parser and LLM reasoning components both rely on LLM structures, leading to high computational costs.\n\n2. Both the temporal localizer and spatial localizer use existing models, specifically UniVTG and YOLO-World, which contribute to significant parameter overhead. As the complexity of VideoQA tasks increases, relying on these two models alone may not only limit predictive accuracy but also compromise the completeness of reasoning paths. Future work may need to explore additional modules to support diverse combinations (see [1] for reference).\n\n3. The ablation study lacks comprehensiveness. 
While the authors assess model performance on QA and grounding tasks and provide an effectiveness analysis of each module, they do not evaluate inference speed, parameter count, or other metrics compared to end-to-end models. Given that the proposed framework integrates multiple existing large models, an analysis of inference speed is both important and currently missing.\n\n[1]. Neural Module Networks\", \"questions\": \"To generate reasoning paths for VideoQA, would it be more effective to design a Chain-of-Thought dataset and perform supervised fine-tuning (SFT)? The O1 model currently adopts this approach, achieving clear reasoning paths through an end-to-end structure.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a modular system for video question-answering (VideoQA). The motivation for a modular framework is that it can be both more interpretable (by providing a reasoning trace) and more accurate than a single black-box model. The authors employ a parsing LLM to generate programs from the input question, separate temporal- and spatial-grounding modules, and an answering LLM as well. The authors use DPO to train the parsing LLM, as an alternative to few-shot prompting of the LLM as used in prior works such as MoreVQA.\n\nReviewers appreciated the motivation of the approach, and the reasoning traces that can be produced by the system. Concerns were that the proposed approach substantially increases the computational cost (it takes about 6x longer to answer a question), and the accuracy improvements do not justify the computational cost of the approach. 
Reviewer gPWW also pointed out that the authors have not evaluated their approach on more complex Video QA datasets where reasoning traces could be more useful.\\n\\nThe AC agrees that the experimental evaluation of the paper is weak: the datasets used by the paper (Next-QA, Next-GQA and STAR) are all outdated and created before LLMs started being widely used. There are also recent VideoQA datasets which contain substantially longer and more complex videos (such as Video-MME (which the authors cited in the rebuttal) and LVBench) which would be more suitable, as they are more challenging, require more reasoning and grounding, and are not saturated like Next-QA, as they were created with (M)LLMs in mind. And although the authors have mentioned in their paper and rebuttal that a focus of the paper is improving interpretability of VideoQA models, the experiments in the paper mostly focus on accuracy. The ablations in the paper (Table 3; top part) only consider accuracy too.\\n\\nThe final decision is therefore to reject this paper. Authors are encouraged to improve the experimental evaluation, and to resubmit a revised version of this paper to a subsequent conference.\", \"additional_comments_on_reviewer_discussion\": \"Please see above. Reviewer concerns were that the proposed approach substantially increases the computational cost (it takes about 6x longer to answer a question), and the accuracy improvements do not justify the computational cost of the approach. Reviewer gPWW also pointed out that the authors have not evaluated their approach on more complex Video QA datasets where reasoning traces could be more useful.\"}", "{\"summary\": \"This paper addresses the black-box problem of multimodal large language models (MLLMs) in VideoQA by proposing MSR-ViR, a novel framework. 
MSR-ViR introduces two core components: (1) the MoST-Grounding module, which localizes relevant temporal segments and spatial regions in videos, and (2) an Alternate Self-Reflection Training Strategy, which iteratively enhances the question parser and the MLLM. Evaluations on datasets such as NExT-QA, STAR, and NExT-GQA demonstrate that MSR-ViR achieves competitive performance on VideoQA tasks and improves interpretability by providing visual evidence for answers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The paper is well written and the motivation is clear, with a strong focus on the interpretability challenge in VideoQA, making it highly relevant to the field.\\n2) The MoST-Grounding module integrates multiple submodules to effectively localize temporal and spatial regions, improving transparency in the input to the MLLM.\\n3) The Alternate Self-Reflection strategy introduces a novel reinforcement-based method to align the question parser and the MLLM, enhancing performance and consistency.\", \"weaknesses\": \"1) The framework is relatively heavy, relying on multiple external tools for tasks and additional operations such as resizing and truncation, which increases computational overhead. Moreover, what if these external tools are unreliable, which can lead to further exposure bias? It's necessary to further investigate the choice of the sub-modules in the MoST-Grounding module.\\n2) While the approach improves the selection of input information, it does not make the internal reasoning process of the MLLM more interpretable. It still focuses on the process of 'input' to decide which information should be fed into the MLLM as soft prompts. \\n3) The paper misses references with related works such as SeViLa [1] and GCG [2], which also focus on VideoQA with grounding elements. Including these baselines would strengthen the empirical validation.\\n\\n[1] Yu et al. 
\\\"Self-Chained Image-Language Model for Video Localization and Question Answering\\\", 2023 NIPS\\n[2] Wang et al. \\\"Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering\\\", 2024 ACM MM\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Response to Reviewer ndrP\", \"comment\": \"We thank the reviewer again for the further feedback on our responses. Below, we provide further responses to the reviewer\\u2019s concerns:\\n\\n**1. Regarding the concern about inference speed:**\\nWe would like to emphasize again that reasoning-based inference and end-to-end inference are fundamentally different. It is entirely normal and common for reasoning-based inference to be significantly slower than end-to-end inference, as it not only provides answers but also generates step-by-step reasoning paths to deliver more accurate and interpretable responses.\\n\\nStill taking MoReVQA[1] as an example, this work shares similarities with ours, solving reasoning-based VideoQA tasks. Its framework includes four LLMs, two additional vision-language models and a video captioning model, with the reasoning process spanning four stages. Clearly, the computational cost of the MoReVQA framework is far greater than that of our MSR-ViR framework. Although the MoReVQA paper does not report inference speed and the code is not open-sourced, it is foreseeable that its inference speed is not significantly faster than our framework. Compared to its direct baseline JCEF, MoReVQA introduces considerable computational complexity and time overhead for its multi-stage reasoning process. However, it achieves a 2.5% accuracy improvement on NExT-QA, which is comparable to our MSR-ViR framework(see the table below). 
Just like MoReVQA and our MSR-ViR, most reasoning methods share similar characteristics, with increased computational costs compared to their base models. Contributions of reasoning methods extend beyond improving question-answering accuracy. Providing interpretable reasoning paths(and also evidence of answers in videos like MSR-ViR) is also a critical contribution, which inevitably brings greater computational overhead.\\n\\nIt is worth noting that grounding-based methods also often have slower inference speeds. For example, SeViLa[2] has an inference speed of only 0.30 qa pairs / s(as mentioned in SeViLa paper's appendix), which is comparable to the inference speed of our MSR-ViR framework. Note that SeViLa is not designed for long-form videos.\\n\\n**2. Regarding the concern that the accuracy improvement of MSR-ViR on VideoQA datasets is marginal:**\\nOn the STAR-sub dataset, MSR-ViR(Qwen-VL-based) achieved a 3.4% improvement over its baseline (63.0 \\u2192 66.4). Specifically, on the Interaction and Sequence subsets, where temporal and spatial relationships are critical, it improved by 4.4% and 2.5%, respectively. We believe this significant accuracy improvement demonstrates the effectiveness of the MSR-ViR framework and is far from being marginal.\\n\\nOn the NExT-QA dataset, as shown in the table below, we compared the performance improvement of our method and other grounding-based methods and modular methods relative to their respective base models. For example:\\n\\n- SeViLa improved by 1.2% over its base model BLIP-2$^{\\\\text{concat}}$.\\n- MoReVQA improved by 2.5% over its direct baseline JCEF.\\n- Our MSR-ViR(Qwen-VL-based) improved by 1.7% over its base model Qwen-VL.\\n- Our MSR-ViR(Llava-Next-based) improved by 1.8% over its base model Llava-Next.\\n\\nThe improvements achieved by our method are comparable to those of existing grounding-based methods and modular methods, which we believe are not marginal.\\n\\n| | NExT-QA Tem. | NExT-QA Cau. 
| NExT-QA Des. | NExT-QA Avg. |\\n| ------------------------------- | ------------ | ------------ | ------------ | ------------ |\\n| BLIP-2$^{\\\\text{concat}}$ | 68.1 | 72.9 | 81.2 | 72.6 |\\n| SeViLa | 69.4(+1.3) | 74.2(+1.3) | 81.3(+0.1) | 73.8(+1.2) |\\n| JCEF | 61.6 | 68.3 | - | 66.7 |\\n| MoReVQA | 64.6(+3) | 70.2(+1.9) | - | 69.2(+2.5) |\\n| Qwen-VL | 68.4 | 71.3 | 80.6 | 71.9 |\\n| MSR-ViR(ours, Qwen-VL-based) | 69.9(+1.5) | 73.4(+2.1) | 81.5(+0.9) | 73.6(+1.7) |\\n| Llava-Next | 69.5 | 73.3 | 79.7 | 73.1 |\\n| MSR-ViR(ours, Llava-Next-based) | 72.2(+2.7) | 74.6(+1.3) | 80.9(+1.2) | 74.9(+1.8) |\\n\\n[1] Min, Juhong, et al. \\\"MoReVQA: Exploring Modular Reasoning Models for Video Question Answering.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[2] Yu, Shoubin, et al. \\\"Self-chained image-language model for video localization and question answering.\\\" *Advances in Neural Information Processing Systems* 36 (2024).\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer, thank you again for the review and we hope that our response and the uploaded revised paper have addressed your concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions.\"}", "{\"summary\": \"This paper presents MSR-ViR (Modularized Self-Reflected Video Reasoner), a framework for Video Question Answering (VideoQA) that enhances interpretability by integrating multimodal large language models (LLMs) with spatial-temporal grounding and self-reflective training. Traditional multimodal LLMs struggle with interpretability, as they operate as black-box systems without revealing the reasoning process or the video segments informing their answers. MSR-ViR addresses this by using a MoST-Grounding module to localize relevant video segments and spatial regions based on policies generated by a question-parsing LLM, creating a clear reasoning path. 
To refine this process, the framework employs an Alternate Self-reflection Training Strategy, which jointly optimizes the multimodal LLM and the question parser through reinforcement learning, enabling mutual refinement based on feedback. Evaluations on popular VideoQA datasets (NExT-QA, STAR, and NExT-GQA) show that MSR-ViR surpasses traditional and grounding-based methods, demonstrating improved accuracy and the ability to localize relevant video segments, thereby providing visually grounded evidence for its answers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents several strengths in addressing Video Question Answering (VideoQA) tasks. Primarily, it enhances the interpretability of multimodal large language models (LLMs), which traditionally function as black-box systems. This framework, with its MoST-Grounding module, identifies relevant video segments and spatial regions, aligning them with text inputs to support answer derivation. The method also uses reinforcement learning to train the multimodal LLM and question parser LLM in tandem, which represents an innovative approach that bolsters model transparency and clarity.\\n\\nThe paper summarizes how grounded and modular VideoQA methods, which aim to answer questions while identifying relevant regions in the video, have seen notable advancements through the integration of multimodal large language models (LLMs). \\n\\nThe paper\\u2019s extensive experimental validation on datasets like NExT-QA and STAR showcases the method\\u2019s superior performance and its ability to provide visually-grounded evidence, setting it apart from existing models. 
This combination of interpretability, strategic training, and improved accuracy underscores the paper\\u2019s significant contributions to advancing VideoQA methodologies.\", \"weaknesses\": \"The use of reinforcement learning (RL) in the paper, while instrumental in facilitating the Alternate Self-reflection Training Strategy, has some limitations concerning novelty and implementation. RL is well-established for optimizing policies in non-differentiable tasks, and its application to train LLMs collaboratively is not entirely unprecedented, as other multimodal and modular frameworks have explored similar strategies. Additionally, the reinforcement learning process depends heavily on the quality of intermediate feedback provided by the multimodal LLM, which may propagate errors if the initial predictions are suboptimal. Furthermore, RL\\u2019s computational overhead and potential convergence issues in complex scenarios are not fully addressed, leaving questions about its scalability for larger datasets or more intricate VideoQA tasks.\\n\\nThe paper claims the method is able to deal with complex tasks. Would you consider adding or discussing several recent video question answering tasks for comprehensive evaluations? (e.g., SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge, Complex-TV-QA: A Study of Situational Reasoning for Traffic Understanding, etc.)\\n\\nThe experimental improvements reported in the paper, while demonstrating the effectiveness of the proposed method, appear limited when compared to other state-of-the-art methods, as highlighted in Table 2. Although the MSR-ViR framework outperforms baseline and grounding-based approaches, the margin of improvement is relatively modest, raising concerns about the practical significance of the gains. \\n\\nA concern regarding the paper is the absence of clear information about the availability of open-source code for review. 
It is crucial for verifying the implementation details, replicating experimental results, and evaluating the broader applicability of the proposed methods.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer P6dF\", \"comment\": \"We sincerely thank the reviewer for taking time to review our paper and providing thoughtful feedback and insightful suggestions. We address the weaknesses as follows:\\n\\n**Weakness1**\\n\\nWe use the UniVTG model, pretrained on several temporal grounding datasets, as the temporal localization tool in the MoST-Grounding module. However, we have not conducted further training for UniVTG on our dataset, NExT-GQA. As shown in Table 2, while our temporal grounding results are slightly weaker than Temp[CLIP] and VGT in the [email protected] metric, our performance is significantly stronger across all other grounding metrics. We believe this demonstrates that MSR-ViR with the pretrained temporal grounding module UniVTG has superior temporal localization capabilities.\\n\\n**Weakness2**\\n\\nSince the temporal grounding module cannot achieve 100% accuracy in identifying the time segments relevant to the question, sampling video frames solely from the grounding results may still result in missing some information. Therefore, the global representation $g_v$ serves as a necessary \\\"back door\\\". However, $g_v$ is not the most critical source of information\\u2014frames sampled from the temporal grounding results and spatial grounding results are more important. To demonstrate this, we have supplemented the ablation study on NExT-QA by removing the temporal grounding frames and spatial grounding frames, leaving only the global representation $g_v$. The results show a significant drop in question-answering accuracy, proving the importance of the grounded frames.\\n\\n| | Tem. | Cau. | Des. | Avg. 
|\\n| -------------------------------- | -------- | -------- | -------- | -------- |\\n| MSR-ViR | 69.9 | 73.4 | 81.5 | 73.6 |\\n| MSR-ViR(w/o self-reflection) | 67.2 | 72.5 | 80.5 | 72.1 |\\n| MSR-ViR(w/o $g_v$) | 66.9 | 70.1 | 78.0 | 70.4 |\\n| MSR-ViR(w/o instruction prompts) | 68.3 | 72.4 | 82.4 | 72.8 |\\n| MSR-ViR(w/o spatial modules) | 67.0 | 72.5 | 81.4 | 72.2 |\\n| **MSR-ViR (only w/ $g_v$)** | **63.3** | **68.4** | **77.5** | **68.3** |\\n\\n**Weakness3**\\n\\nBased on the reviewer\\u2019s suggestion, we introduced SeViLa\\u2019s testing results on NExT-QA, STAR, and NExT-GQA for comparison with the Llava-Next version of our MSR-ViR framework.\\n\\n| | NExT-QA Tem. | NExT-QA Cau. | NExT-QA Des. | NExT-QA Avg. | STAR-sub Int. | STAR-sub Seq. | STAR-sub Avg. |\\n| ------------------------------- | ------------ | ------------ | ------------ | ------------ | --------------- | ------------- | ------------- |\\n| SeViLa | 69.4 | 74.2 | **81.3** | 73.8 | 63.7 | 70.4 | 67.1 |\\n| MSR-ViR(ours, Llava-Next-based) | **72.2** | **74.6** | 80.9 | **74.9** | **68.9** | **73.1** | **71.0** |\\n\\n| | Acc@GQA | mIoP | [email protected] | [email protected] | mIoU | [email protected] | [email protected] |\\n| ------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| SeViLa | 16.6 | 29.5 | 34.7 | 22.9 | 21.7 | 29.2 | 13.8 |\\n| MSR-ViR(ours, Llava-Next-based) | **18.6** | **29.6** | **39.0** | **24.1** | **23.4** | **33.6** | **16.4** |\"}" ] }
4S9bBbX1be
DriveArena: A Closed-loop Generative Simulation Platform for Autonomous Driving
[ "Xuemeng Yang", "Licheng Wen", "Tiantian Wei", "Yukai Ma", "Jianbiao Mei", "Xin Li", "Wenjie Lei", "Daocheng Fu", "Xing Gao", "Pinlong Cai", "Tao MA", "Min Dou", "Hongsheng Li", "Liang He", "Yong Liu", "Botian Shi" ]
This paper introduces DriveArena, the first high-fidelity closed-loop simulation system designed for driving agents navigating real-world scenarios. DriveArena comprises two core components: Traffic Manager, a traffic simulator capable of generating realistic traffic flow on any global street map, and World Dreamer, a high-fidelity conditional generative model with infinite auto-regression. DriveArena supports closed-loop simulation using road networks from cities worldwide, enabling the generation of diverse traffic scenarios with varying styles. This powerful synergy empowers any driving agent capable of processing real-world images to navigate in DriveArena's simulated environment. Furthermore, DriveArena features a flexible, modular architecture, allowing for multiple implementations of its core components and driving agents. Serving as a highly realistic arena for these players, our work provides a valuable platform for developing and evaluating driving agents across diverse and challenging scenarios. DriveArena takes a significant leap forward in leveraging generative models for driving simulation platforms, opening new avenues for closed-loop evaluation of autonomous driving systems.
[ "Autonomous Driving", "Diffusion Model", "Closed-loop Simulation" ]
Reject
https://openreview.net/pdf?id=4S9bBbX1be
https://openreview.net/forum?id=4S9bBbX1be
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wPLKe0L5Ur", "sRusqgxeMH", "o6OMKseBAv", "kLZsCyg2HE", "jfHUGgygex", "hKxgfkBzP1", "go0Vvh7ECa", "eoMJOIn01V", "dxCDp430J2", "cmZ2JWQu0w", "cOkH50nxqd", "XVaVqgZp7d", "WgrB7lieZ8", "VSXdusenNF", "RR3PFvHVUn", "PGD7W72FZf", "OECAleIDLn", "IjbgPzfjxA", "I0v5Moqz3I", "Ddt1XtGdF7", "CSnRJgZ1Eb", "BdWSmNzoSG", "BOR5AIViUv", "AZqkI4FV1r", "8yvpXoaHUD", "86AuOqReaC", "2Fy2YBEh7B" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732212579137, 1732008378674, 1732008341546, 1734928162856, 1732007941280, 1733111745804, 1732007374336, 1732246532732, 1732675435427, 1732504090385, 1732008501356, 1733111793584, 1732212581609, 1732008012145, 1730681997162, 1732212575407, 1733112997757, 1737523532389, 1730504000256, 1732503964013, 1732007469254, 1730259191695, 1733115967717, 1732632248946, 1732503807535, 1732008236140, 1730738855349 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_UuYZ" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Area_Chair_f4Qy" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_i9PT" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_UuYZ" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_JafA" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_UuYZ" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_JafA" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_UuYZ" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_i9PT" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_UuYZ" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Authors" ], [ "ICLR.cc/2025/Conference/Submission2783/Reviewer_Pxgv" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the additional experiments! It's great to see that the higher fidelity DriveArena* leads to better open-loop and closed-loop eval metrics!\\n\\nDo you have an explanation for why UniAD closed-loop PDMS score in DriveArena* improved by 50%, while VAD's scores only improved by 5%?\"}", "{\"title\": \"Author Response for Reviewer UuYZ (Part 3)\", \"comment\": \"**Q4: Evaluation of sim-to-real gap: How are the videos generated? Is this in open- or closed-loop?**\\n\\n**A4:** World Dreamer generates images solely based on the input scene layout information, including map layouts and object bounding boxes, without distinguishing whether these layout inputs originate from open-loop or closed-loop simulation.\\n\\nOn our provided webpage, unless specifically noted otherwise, the videos predominantly showcase open-loop driving scenarios. We also demonstrated some closed-loop driving simulations. 
These open-loop visualization cases stem from a practical limitation: both UniAD and VAD struggle to maintain stable and safe driving in closed-loop environments for extended periods, which prevents us from fully demonstrating DriveArena's capabilities.\\n\\n**Q5: Do you notice any suffering from DAgger issues?**\\n\\n**A5**: Regarding the \\\"DAgger problem\\\", we would like to confirm whether you are referring to the problem that the driving agent learns on the open-loop nuScenes dataset and suffers from degraded generalization performance on unseen DriveArena scenes.\\n\\nIf so, we indeed observed significant performance limitations when driving agents are trained solely on open-loop datasets and deployed in closed-loop scenarios. As we highlighted in our introduction, existing driving datasets predominantly contain trajectory samples from straightforward driving scenarios, where *agents can achieve reasonable performance by simply maintaining their current speed.* More critically, in open-loop driving environments, each decision made by the agent is based on a relatively safe state. And *when the agent deviates from these safe trajectories, it often struggles to effectively correct its path*.\\n\\nThis observation aligns with the fundamental challenge in AD: **agents trained on open-loop data lack the ability to recover from error states**, as they have never encountered such scenarios during training. This limitation is *precisely* one of our motivations for developing a closed-loop high-fidelity simulator, as it allows agents to learn and adapt to a broader range of driving scenarios, including recovery from dangerous corner cases.\"}", "{\"title\": \"Author Response for Reviewer UuYZ (Part 2)\", \"comment\": \"**Q2: It's unclear what role the fidelity of DriveArena plays. In closed-loop eval, UniAD outperforms VAD in DriveArena. 
It's unclear whether these differences are due to open- / closed-loop model gaps or issues in DriveArena.**\\n\\n**A2:** This is a great question. Let\\u2019s try to discuss this comprehensively.\\n\\n1. By comparing the first and second rows in Table 2, we observe that both VAD and UniAD models show minimal performance differences between the original and generated datasets. This suggests that from the driving agent's perspective, the images generated by World Dreamer in *original nuScenes scenarios* maintain high similarity with the original nuScenes images.\\n2. Through careful analysis, we believe there exists a domain bias between DriveArena and nuScenes. This bias manifests in two key aspects: First, the image quality and fidelity of World Dreamer-generated images still have room for improvement, particularly in terms of temporal consistency. Second, the traffic flow patterns and vehicle interaction behaviors simulated by the Traffic Manager module show certain differences from the original nuScenes dataset in key characteristics such as traffic density. We intentionally maintain these differences to test the generalization capabilities of driving agents.\\n3. In DriveArena's open-loop mode, VAD indeed achieves higher PDMS metrics than UniAD (contrary to nuScenes open-loop results), but this is primarily due to VAD scoring nearly 20% higher in trajectory comfort metrics (C). For other critical metrics like collision avoidance and drivable area compliance, both models perform similarly. 
\\n \\n However, in closed-loop mode, UniAD outperforms VAD in driving score metrics, mainly because UniAD can drive a much longer route, thus achieving higher Route Completion (RC) rates.\\n \\n\\nIn conclusion, we believe that by using DriveArena's comprehensive evaluation standards (both open-loop and closed-loop metrics), we can minimize the impact of sim-to-real gaps and better reflect the inherent capability differences between AD models.\\n\\n**Q3: Would it be possible to evaluate the models on various levels of fidelity in DriveArena to disentangle open- / closed-loop eval from it?**\\n\\n**A3**: Following your suggestion, we conducted additional evaluations in DriveArena using models with different fidelity levels. Specifically, we employed an internally developed World Dreamer generation model with enhanced realism and better temporal consistency, which demonstrates improved performance with a reduced FID score from 16.03 to 14.6. We tested the new model in both open-loop and closed-loop modes, and the results are shown in the following table. 
(`DriveArena*` denotes the framework integrated with our improved World Dreamer model)\\n\\n| Scenario | Driving Agent | NC \\u2191 | DAC \\u2191 | EP \\u2191 | TTC \\u2191 | C \\u2191 | PDMS \\u2191 |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| DriveArena | VAD | 0.807\\u00b10.11 | 0.950\\u00b10.05 | 0.795\\u00b10.13 | 0.800\\u00b10.12 | 0.913\\u00b10.09 | 0.683\\u00b10.12 |\\n| | UniAD | 0.792\\u00b10.11 | 0.942\\u00b10.04 | 0.738\\u00b10.11 | 0.771\\u00b10.12 | 0.749\\u00b10.16 | 0.636\\u00b10.08 |\\n| DriveArena* | VAD | 0.829\\u00b10.08 | 0.954\\u00b10.05 | 0.767\\u00b10.07 | 0.815\\u00b10.11 | 0.920\\u00b10.10 | 0.687\\u00b10.05 (+0.004) |\\n| | UniAD | 0.843\\u00b10.04 | 0.958\\u00b10.05 | 0.728\\u00b10.06 | 0.829\\u00b10.05 | 0.704\\u00b10.14 | 0.669\\u00b10.02 (+0.033) |\\n| nuScenes GT | Human | 1.000\\u00b10.00 | 1.000\\u00b10.00 | 1.000\\u00b10.00 | 0.979\\u00b10.12 | 0.752\\u00b10.17 | 0.950\\u00b10.06 |\\n\\n| Driving Agent | Sim | PDMS \\u2191 | ADS \\u2191 |\\n| --- | --- | --- | --- |\\n| VAD in `bos_route_1` | DriveArena | 0.5830 | 0.0352 |\\n| | DriveArena* | 0.6140 (+0.0310) | 0.0532 |\\n| UniAD in `bos_route_1` | DriveArena | 0.4952 | 0.0450 |\\n| | DriveArena* | 0.7401 (+0.2449) | 0.0760 |\\n\\n**Open-loop**: When using generated images from a higher-fidelity World Dreamer, UniAD showed a 5% improvement in open-loop PDMS metrics, while VAD only achieved a 0.5% improvement. \\n\\n**Closed-loop**: More notably, in the closed-loop evaluation on Boston-route 1, UniAD demonstrated a remarkable 50% enhancement in PDMS metrics, while VAD only showed a modest 5% improvement. \\n\\nThese results clearly demonstrate that **the improved fidelity of DriveArena directly translates into enhanced driving agent performance**. Moreover, the driving agent exhibits **consistent behavior across WorldDreamer implementations** of varying fidelity levels. 
This underscores the practical value and effectiveness of our work.\"}", "{\"metareview\": \"This work proposes a closed-loop simulator for autonomous driving. The simulation involves two components: a traffic simulation system that generates traffic flow simulations and a multi-view image generation system for creating images based on the generated traffic and text prompt. The topic of AV simulation is timely and important, and the present work is clearly written. However, it is limited, as the underlying design choices limit geometric and semantic consistency, which ultimately puts the usefulness and contribution of such a platform into question. Given that the authors position this work from an overall system perspective, I am weighing the lack of this consistency higher. The authors are correct in that consistency is not the only thing needed for meaningful AV simulation, but it does seem to be an important requirement for such a work to be broadly useful for the community.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed most of the concerns in the discussion with the reviewers. While one of the reviewers (Pxgv) did not update their scores or comment on the responses provided by the authors, most of this reviewer's concerns seem to be addressed (with the exception of consistency, which has been raised by other reviewers as well).\"}", "{\"title\": \"Author Response for Reviewer JafA (Part 1)\", \"comment\": \"Dear Reviewer JafA:\\n\\nThank you for your acknowledgment of DriveArena\\u2019s idea and constructive comments. We provide the following discussions and explanations regarding your concerns.\\n\\n**Q1: The core concerns raised involve geometric/semantic consistency issues and temporal coherence problems in the generated videos.**\\n\\n**A1:** Let's discuss geometric/semantic consistency and temporal coherence issues separately.\\n\\n1. 
Geometric/semantic inconsistency: \\n \\n We acknowledge that World Dreamer lacks geometric and semantic consistency guarantees due to the absence of a 3D model constraint. \\n \\n While some recent studies have attempted to reconstruct 3D scenes from long generated video sequences (e.g., MagicDrive3D[1] and DriveDreamer4D[2]), their reconstruction results are based on generated videos\\u2014a capability that World Dreamer also possesses. Using generative models to help improve the performance of reconstruction models is also a direction worth exploring, but it is beyond the scope of our current discussion. We have supplemented the above issues in the updated version of the manuscript (line 198-199).\\n \\n Our approach mitigates these geometric inconsistencies by introducing scene layout conditional constraints, which effectively maintain consistency in road topology and vehicle positioning. As demonstrated in Figure 6 of our manuscript, driving agents can successfully interpret our generated images and accurately extract map and vehicle information.\\n \\n2. Temporal inconsistency:\\n \\n We acknowledge certain limitations in DriveArena, which operates on a single-frame input-output basis. However, we are actively developing a temporal version (as shown in our project page demonstration: https://blindpaper.github.io/DriveArena/#Infinite_Multi-View_Video_Generation) that has already shown promising results. The academic community has made significant breakthroughs in temporal consistency through works like Panacea [3] and DrivingDiffusion[4], and we plan to integrate these advanced generative models into the DriveArena framework to enhance its temporal consistency.\\n \\n\\nWhile improving scene continuity and consistency would undoubtedly enhance simulator performance, is geometric/temporal consistency the only requirement for current AD technology? We believe that high-fidelity images and closed-loop interactivity are more crucial aspects of a simulator. 
Our DriveArena approach enables constructing more realistic corner case scenarios (as shown in Appendix A5), allowing for better evaluation of autonomous driving algorithms. This addresses some limitations of existing simulators like Carla and represents a promising research direction. \\n\\n**Q2: It appears that the video diffusion model suffers from mode collapse problems.**\\n\\n**A2**: This can be attributed to the limited training data from the nuScenes dataset, which only contains approximately *4 hours of video data*. Additionally, since World Dreamer employs an autoregressive generation approach, the model indeed exhibits some mode collapse behavior. To address this limitation, one potential solution could be to periodically incorporate style reference images during generation, which might help increase the diversity of the generated images.\\n\\n[1] Gao, Ruiyuan, et al. \\\"MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes.\\\"\\u00a0*arXiv preprint arXiv:2405.14475*\\u00a0(2024).\\n\\n[2] Zhao, Guosheng, et al. \\\"Drivedreamer4d: World models are effective data machines for 4d driving scene representation.\\\"\\u00a0*arXiv preprint arXiv:2410.13571*\\u00a0(2024).\\n\\n[3] Wen, Yuqing, et al. \\\"Panacea: Panoramic and controllable video generation for autonomous driving.\\\"\\u00a0*Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[4] Li, Xiaofan, Yifu Zhang, and Xiaoqing Ye. 
\\\"DrivingDiffusion: Layout-Guided multi-view driving scene video generation with latent diffusion model.\\\"\\u00a0*arXiv preprint arXiv:2310.07771*\\u00a0(2023).\"}", "{\"comment\": \"Dear Reviewer Pxgv,\\n\\n**As we are now in the final day of the extended rebuttal period,** we would greatly value any feedback on our previous responses to ensure we have addressed any of your concerns.\\n\\nWhile we fully understand your busy schedule, we would greatly appreciate any response you could provide during these final hours of the discussion period.\\n\\nAuthors of Submission 2783\"}", "{\"title\": \"Author Response for Reviewer Pxgv (Part 1)\", \"comment\": \"Dear Reviewer Pxgv\\uff1a\\n\\nThank you for your review and comments. We provide the following discussions and explanations regarding your concerns.\\n\\n**Q1: The submission lacks novelty. The proposed DriveArena is just a combination of several existing methods.**\\n\\n**A1:** As acknowledged by other reviewers (e.g., Reviewer JafA: \\\"*connecting them all together ... is appreciated and presents a potential path towards practical generative simulation*\\\". Reviewer UuYZ: \\u201c*Novelty: High-fidelity closed-loop image-based simulation with clear controllability*\\u201d), we are the first pioneering work to apply generative models as AD simulators. We firmly believe that **closed-loop evaluation of driving agents in realistic street scenarios is both necessary and practically valuable**. Moreover, connecting these modules into one closed-loop simulation platform is not trivial. Our modular architecture enables DriveArena to be compatible with different Traffic Managers and World Dreamer methods for simulating various driving agents, which is particularly valuable for the autonomous driving community.\\n\\nBesides, our World Dreamer differs from the DriveDreamer[1] and Vista[2] you mentioned: While DriveDreamer can generate continuous video clips, it cannot guarantee coherence between segments. 
Vista, as a world model, lacks control over background vehicles. We believe that controllability and long-term temporal coherence are crucial elements for generative models within a simulator. In contrast, our World Dreamer module not only achieves precise control over vehicles in the scene but also enables theoretically infinite-length video generation through an autoregressive paradigm.\\n\\n**Q2: The World Dreamer model is trained primarily on the nuScenes dataset. To improve the model's generalizability, it would be beneficial to incorporate additional datasets.**\\n\\n**A2:** Among various autonomous driving datasets, our choice of nuScenes as the training dataset for World Dreamer was based on several key considerations:\\n\\n1. Representativeness and Popularity: the nuScenes dataset is one of the most widely adopted autonomous driving datasets. With its comprehensive annotations, it has become a benchmark for numerous autonomous driving approaches and generative models. \\n2. Diversity: Unlike the nuPlan dataset which is *limited to sunny daytime scenarios,* nuScenes data collection actually spans multiple cities and weather/daylight conditions, providing necessary diversity.\\n3. 360-degree Surround View: nuScenes provides complete surround-view image data, making it superior to the Waymo dataset, which lacks rear views.\\n\\nWe also demonstrated generalizability through zero-shot inference on the nuPlan dataset (as shown in Appendix Figure 9). 
The results indicate that despite different camera settings and road network structures, our method can directly adapt to nuPlan's road networks without any training.\\n\\nAdditionally, following your suggestion, we also **trained a new version of World Dreamer using both nuPlan and nuScenes datasets.** By incorporating more diverse driving data, we further enhanced the generative model's generalization capability, enabling it to generate street scenes from Las Vegas and Pittsburgh (please refer to Figure 11 in the revised manuscript).\\n\\n\\n[1] Wang, Xiaofeng, et al. \\\"Drivedreamer: Towards real-world-driven world models for autonomous driving.\\\"\\u00a0*arXiv preprint arXiv:2309.09777*\\u00a0(2023).\\n\\n[2] Gao, Shenyuan, et al. \\\"Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability.\\\"\\u00a0*arXiv preprint arXiv:2405.17398*\\u00a0(2024).\"}", "{\"comment\": \"Dear reviewer UuYZ:\\n\\nThank you for your timely response. We are pleased that our additional experiments have effectively addressed your concerns.\\n\\nRegarding the observation that *UniAD demonstrates greater improvements than VAD in the DriveArena\\\\* closed-loop experiment*, we would like to offer our preliminary analysis. As evidenced in Table 1 of Response Part 2, when the World Dreamer's quality improves, UniAD exhibits substantial gains on most metrics, even surpassing VAD's performance indicators. Similarly, Table 2 reveals an even more pronounced enhancement in UniAD's metrics. These two pieces of evidence suggest that **UniAD possesses higher sensitivity to temporal consistency in generated image sequences**, allowing it to achieve more significant improvements when presented with temporally coherent video sequences.\\n\\nWe sincerely appreciate your careful review and feedback, which have helped strengthen our manuscript. Your support and recognition is greatly valued.\"}", "{\"comment\": \"Thank you for your reply! 
I don't have any other questions.\"}", "{\"comment\": \"Dear Reviewer i9PT,\\n\\nWe sincerely appreciate your time and effort in reviewing our manuscript and offering valuable suggestions.\\n\\n**As the author-reviewer discussion period is approaching its end, and given there will not be a second round of discussions, we would like to confirm whether our responses have effectively addressed your concerns.**\\n\\nIf you require further clarification or have any additional concerns, we remain fully committed to addressing them promptly.\\n\\nBest regards,\\n\\nAuthors of Submission 2783\"}", "{\"title\": \"Author Response for Reviewer i9PT\", \"comment\": \"Dear Reviewer i9PT:\\n\\nThank you for your acknowledgement of our approach and constructive comments. We provide discussions and explanations about your concerns as follows.\\n\\n**Q1: About constructing traffic scenarios from OSM.**\\n\\n**A1:** Thanks for your question. Let me elaborate on how DriveArena utilizes OSM maps and generates traffic scenarios. \\n\\nWhile OSM only contains road-level map information, we employ a two-stage process using SUMO tools for map processing. Initially, we utilize the *OSMWebWizard* tool to download OSM maps and establish a topological roadnet. Subsequently, we employ the *randomTrips* script to create vehicle demands and their origin-destination pairs within the map. These steps constitute the pre-simulation process. During actual simulation, DriveArena's traffic manager module, which is a modified version of LimSim, takes control and manages background vehicle trajectory planning and interactions with the ego vehicle, ensuring vehicles reach their destinations according to the generated traffic demands.\\n\\nWe acknowledge the concern about OSM's varying quality across different regions. To address this limitation, DriveArena supports simulation on any map in SUMO format. 
This flexibility provides users with multiple options: they can modify downloaded OSM maps according to their specific requirements, manually create custom maps, or convert OpenDrive format maps into the supported format. This approach offers users considerable freedom in creating diverse map types suitable for their specific simulation needs.\\n\\nWe have also included a detailed elaboration of this process in Appendix A.2.2 of our revised manuscript. \\n\\n**Q2: Are there any indicators to evaluate the generated results directly, like FID or FVD?** \\n\\n**A2:** To address your concerns, we have included FID metric comparisons below. Our DriveArena achieves a 16.03 FID, which outperforms MagicDrive\\u2019s 16.20.\\n\\n| Method | FID\\u2193 |\\n| --- | --- |\\n| MagicDrive | 16.20 |\\n| DriveArena | 16.03 |\\n\\nHowever, we want to note that FID might not be the ideal metric for evaluation. Some recent works have pointed out that FID combines sample quality and diversity into a single value, making it unable to distinguish between these two important aspects [1] and lacking interpretability [2]. We believe using autonomous driving algorithms for fidelity evaluation is more intuitive and interpretable.\\n\\n**Q3: It seems that Figure 3 is not in vector format.**\\n\\n**A3:** Thank you for your careful observation! We apologize for affecting your reading experience. We have replaced this image with a better version in the revised version.\\n\\n[1] Kynk\\u00e4\\u00e4nniemi, Tuomas, et al. \\\"Improved precision and recall metric for assessing generative models.\\\"\\u00a0*Advances in neural information processing systems*\\u00a032 (2019).\\n\\n[2] Naeem, Muhammad Ferjad, et al. \\\"Reliable fidelity and diversity metrics for generative models.\\\"\\u00a0*International Conference on Machine Learning*. 
PMLR, 2020.\"}", "{\"comment\": \"Dear Reviewer JafA,\\n\\nAs we are now in the final day of the extended rebuttal period, we would greatly value any feedback on our previous responses to ensure we have addressed any of your concerns.\\n\\n**Even a brief confirmation would be immensely helpful for us.**\\n\\nWhile we fully understand your busy schedule, we would greatly appreciate any response you could provide during these final hours of the discussion period.\\n\\nAuthors of Submission 2783\"}", "{\"comment\": \"Thank you for the answers! Yes, my question targeted the data distribution shift between open-loop training data and closed-loop rollouts with respect to the agent \\\"states\\\" (particularly what you call \\\"error states\\\").\"}", "{\"title\": \"Author Response for Reviewer JafA (Part 2)\", \"comment\": \"**Q3: The AV results in Tables 1 and 2 still show a significant gap to real data, there is still much work to be done to leverage it for practical AV development.**\\n\\n**A3:** Compared to the MagicDrive baseline, DriveArena demonstrates significant improvements in metrics such as segmentation and agent collision rate. However, indeed, as shown in Tables 1 and 2, there is still a noticeable gap between DriveArena's generated images and nuScenes GT. As you pointed out, DriveArena, being the first generative controllable closed-loop simulation platform, still has room for improvement. \\n\\nFor instance, due to current computational resource constraints, DriveArena generates images at a resolution of 224\\\\*400, which is four times smaller than the original nuScenes images (900\\\\*1600). This resolution difference significantly impacts various fidelity metrics. One of our future directions is to increase the generation resolution to further reduce the sim-to-real gap in DriveArena.\\n\\n**Q4: How does the multi-frame auto-regressive version of the diffusion model work in practice? Further, how are the generated videos used? 
Are T frames predicted but only the 1st one is shown to the ego-policy, and then a new prediction is made (similar to model-predictive control)?**\\n\\n**A4:** As mentioned in the paper, in the multi-frame version, we reference multiple past frames and output multi-frame images with additional temporal modules, which helps the diffusion model better capture the motion patterns between frames and generate videos with improved temporal consistency.\\n\\nDriving agents trained on nuScenes, such as UniAD, follow a planning frequency of 2 Hz, while our Traffic Manager module operates at 10 Hz. When the simulation starts, the temporal version of our generation model outputs at 10 Hz, generating 7 frames each time, where the first 2 frames overlap with the last two frames from the previous output, resulting in 5 new generated frames. We then feed the last generated frame to the driving agent running at 2 Hz for the next planning step. This approach ensures that the generated videos appear more continuous and smooth while maintaining proper closed-loop simulation.\\n\\nPreliminary results of the temporal version can be found at the end of our project website: https://blindpaper.github.io/DriveArena/#Infinite_Multi-View_Video_Generation. We hope this addresses your questions about the multi-frame World Dreamer version.\"}", "{\"summary\": \"This work presents a generative simulation framework for autonomous driving. 
In particular, a layout-conditional diffusion model is proposed as a sensor simulator, with bounding boxes and road graphs serving as the underlying world state.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The topic of generative simulation is an especially timely one for the AV field, with better and better generative video models coming out regularly and frameworks such as this one being able to capitalize on these parallel investments.\\n\\nWhile all of the individual pieces of this framework have existed before, connecting them all together into a usable simulation framework for the broader community to use is appreciated and presents a potential path towards practical generative simulation.\\n\\nThe paper is written well and it is easy to follow the core ideas as presented.\\n\\nLeveraging layout-conditional generation is a sound and sensible idea for maintaining consistency across time.\", \"weaknesses\": \"The core weakness is that any geometric or semantic consistency is not guaranteed. Layout conditioning certainly helps, but in the linked webpage's videos there are clear inconsistencies across timesteps (e.g., car colors and types changing over time). This is something that is not brought up in Lines 190 - 198, but perhaps should be as it is the core reason why works leverage NeRF or 3D Gaussian Splatting (for their geometric/semantic/temporal consistency over purely-2D generative models).\\n\\nWhile static images of generations appear to be of good quality, there are significant temporal consistency issues when viewed as part of a video on the linked project page (most videos appear to be static even with the ego-vehicle theoretically moving forward in the world). Do the authors have any idea for why that is? 
It almost appears that the video diffusion model suffers from mode collapse when tasked with generating building walls (taking example from the linked webpage).\\n\\nThe AV results in Tables 1 and 2 still show a significant gap to real data, indicating that, while the core points of DriveArena are sensible, there is still much work to be done to leverage it for practical AV development.\", \"questions\": \"In Lines 261-263 it is written \\\"We also verify that extending to a multi-frame auto-regressive version (using multiple past frames as reference and outputting multi-frame images) and adding additional temporal modules can enhance temporal consistency.\\\" - How does this work in practice? In closed-loop, the ego-vehicle can drive however it wants and so there can still be inconsistencies between generated videos at time t and t+1, right? Further, how are the generated videos used? Are T frames predicted but only the 1st one is shown to the ego-policy? And then a new prediction is made (similar to model-predictive control)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the reply and additional data points. I totally agree that FID is not an ideal metric. My main concern was that the metrics provided in Table 1 were produced by chaining several ML models, which makes attribution difficult, particularly when using driving agents, which might also suffer from distribution shifts.\\n\\nYour idea behind downsampling and upsampling nuScenes and adding it to Table 1 is great. 
The results clearly indicate the correlation between data quality and your eval metrics (except for L2 at 1s), which helps build trust in your fidelity evaluation!\"}", "{\"title\": \"Apologies for the delayed response\", \"comment\": \"My apologies for the delay, the rebuttal has cleared up all the questions I had and I am happy to maintain my score (ideally ICLR would allow us to use ratings like 6.5 or 7, as I have no remaining questions and all the weaknesses are plainly stated/understood, but I also don't want to greatly increase my score to an 8).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work presents DriveArena, an image-based high-fidelity simulator for closed-loop simulation of agents for autonomous driving. DriveArena consists of a Traffic Manager and a World Dreamer, and its modular design allows for easy replacement of each of the components. Traffic Manager enables the dynamic control of all traffic agents and supports various HD maps, both of which are inputs to World Dreamer, which uses conditional diffusion to generate realistic images. The diffusion model is conditioned on map and object layout and generates images autoregressively for temporal consistency and variable length simulations.\\n\\nDriveArena is evaluated in two ways. First, its fidelity is evaluated, among others, with UniAD's performance. Then, open- and closed-loop evaluation of VAD and UniAD models is performed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Highly relevant research direction: The work correctly argues for the need of closed-loop evaluation of autonomous driving behavior models.\", \"Novelty: High-fidelity closed-loop image-based simulation with clear controllability (e.g. 
via text prompts).\", \"Performance: Evaluation of sim-to-real gap shows superiority over MagicDrive and reasonable results for open-loop and closed-loop behavior eval.\", \"Well presented: The paper is easy to follow and all information is well presented.\"], \"weaknesses\": [\"Evaluation of sim-to-real gap: The presented evaluation is pretty short and for example lower L2 distances do not necessarily imply higher quality images. Additional evaluation of the fidelity would be helpful. Are there perception metrics such as FID that could be used or other metrics that compare statistics between ground truth and generated images? Otherwise, user studies are another possibility to judge the quality.\", \"Unclear takeaway from VAD vs. UniAD open- and closed-loop comparison: In open-loop, UniAD performs better on nuScenes but worse on DriveArena than VAD. This difference is explained with better open-loop generalization of VAD. However, it's unclear what role the fidelity of DriveArena plays. Is it possible to e.g. run an experiment with different DriveArena datasets, some that are closer and some that are further from nuScenes? In closed-loop eval, UniAD outperforms VAD in DriveArena. It's unclear whether these differences are due to open- / closed-loop model gaps or issues in DriveArena. I acknowledge that this difficulty of correct attribution is inherent to research in this area but you might be able to shed more light on this. For example, would it be possible to evaluate the models on various levels of fidelity in DriveArena to disentangle open- / closed-loop eval from it?\"], \"questions\": [\"Evaluation of sim-to-real gap: How are the videos generated? Is this in open- or closed-loop? For correctness, it should be closed-loop. 
If so, do you notice any suffering from DAgger issues?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer JafA,\\n\\nWe sincerely appreciate your time and effort in reviewing our manuscript and offering valuable suggestions .\\n\\n**As the author-reviewer discussion period is approaching its end, and given there will not be a second round of discussions, we would like to confirm whether our responses have effectively addressed your concerns.**\\n\\nShould you require any additional clarification or have remaining questions, we remain fully committed to addressing them.\\n\\n\\nBest regards,\\n\\nAuthors of Submission 2783\"}", "{\"title\": \"Author Response for Reviewer Pxgv (Part 2)\", \"comment\": \"**Q3: The model should be able to generate the same scene captured from different positions (similar to actual scenarios of driving differently in the same scene). No visualization was found addressing such an issue.**\\n\\n**A3:** To address your concern, we have included a new visualization of the \\\"same scene from different viewpoints\\u201d in the revised manuscript (Figure 8 in the appendix). As shown, the road network and traffic participants remain consistent across two scenes, with the ego vehicle position shifting from the leftmost lane to the middle lane. While there are minor variations in front vehicle colors and street backgrounds, World Dreamer successfully maintains spatial consistency in lane markings and surrounding vehicle positions while preserving similar street styles and building configurations. \\n\\nThis capability is achieved because World Dreamer uses both lane lines and 3D bounding boxes from multi-view as control conditions, along with reference images for style guidance. This enables DriveArena to maintain similarity when generating images of the same scene from different positions. 
However, we acknowledge that since World Dreamer doesn't incorporate a 3D scenario model to constrain geometric consistency (which is only achievable with 3DGS and NeRF-like methods), diffusion-based models theoretically cannot guarantee \\\"completely identical\\\" visual sequences when capturing the same scene from different positions.\\n\\n**Q4: Experiments are not enough. The submission primarily focuses on the simulation platform itself rather than an in-depth evaluation of various driving agents within the platform.**\\n\\n**A4:** We respectfully disagree with the reviewer's assessment regarding insufficient experimentation. Our experimental section encompasses comprehensive evaluations of both World Dreamer performance (including Fidelity, Controllability, and Scalability) and driving agent open-loop and closed-loop experiments. Additionally, in the appendix, we have provided extensive supplementary materials showcasing both successful and failed simulation cases of driving agents within DriveArena, as well as experiments demonstrating DriveArena's capability to generate collision corner cases.\\n\\nThe core of our work lies in validating the feasibility of a generative model-based closed-loop simulator. Through the integration of driving agents, we have successfully demonstrated DriveArena's ability to produce high-fidelity, interactive environments for agent evaluation. As a pioneering work, DriveArena has already supported two mainstream open-source end-to-end AD agents: UniAD and VAD, subjecting them to thorough open-loop and closed-loop evaluations. Furthermore, we have introduced PDMS and Arena Driving score metrics to comprehensively assess agent performance.\\n\\nThrough DriveArena's modular design, we are committed to collaborating with the AD community to enhance the platform. 
Our goal is to incorporate a wider variety of autonomous driving agents in the future, establishing DriveArena as a \\u201creal arena\\u201d for autonomous driving algorithms and providing a standardized testing environment for assessment and comparison for both academia and industry.\\n\\n**Q5: Minor issues in writing and presentation. For example, The figures are not vectorized for zooming in and they are suggested to be replaced.**\\n\\n**A5:** Thank you for your careful observation regarding the figure quality. We apologize for affecting your reading experience. We have replaced Figure 3 with high-resolution vectorized versions in the revised manuscript PDF.\"}", "{\"summary\": \"The paper proposes a traffic simulation platform for testing autonomous driving algorithms. The simulation architecture is built based on LimSim, using Monte Carlo tree search for vehicle motion planning. A diffusion-based renderer is applied to achieve realistic and controllable surround images for vision-based driving agents. The platform supports driving scenario construction from nuScenes and OSM; codes and tools are open-source for the community to use.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Compared to previous work using video generation methods as world models to achieve realistic traffic simulation, this work uses a two-step pipeline, including rule-based traffic simulation in geometric space and diffusion-based video generation conditioned on trajectory layouts of vehicles. 
I believe this approach can achieve better physical realism and temporal consistency.\\n\\nText prompts are introduced to achieve diverse driving scenarios and plenty of demos are presented to clearly show the generation results.\\n\\nThe codes are well organized with a modularized design and the whole platform is open-source to better support downstream tasks in research of autonomous driving.\", \"weaknesses\": \"The author says that one of the main contributions is scalability, which means simulation on any region can be achieved by using map info from OSM. As far as I know, OSM only contains road-level map information, and extra efforts like completing lane topology and planning vehicle OD are needed to construct simulations based on it; this part of the work seems unclear in this paper.\\n\\nAs the dreamer is the most important part of this paper, it would be better if the author could provide some indicators that can directly evaluate the generated results, like FID.\", \"minor_note\": \"it seems that Figure 3 is not in vector format.\", \"questions\": [\"About constructing traffic scenarios from OSM.\", \"How is the HD map built from the OSM data, and how is traffic demand generated in this kind of scenario?\", \"OSM maps are not high quality in many areas; is there a way to solve this?\", \"Is this part of the work mainly based on the tools provided by SUMO or LimSim?\", \"Are there any indicators to evaluate the generated results directly, like FID or FVD?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your response and for confirming that our rebuttal has successfully addressed all your questions.\\n\\nWe greatly appreciate your acknowledgment of our manuscript's merits and understand your position regarding the scoring granularity.\"}", "{\"comment\": \"Thank you for your answer as well as your previous answers and 
updates. I have updated the rating accordingly!\"}", "{\"comment\": \"Dear Reviewer Pxgv,\\n\\nWe sincerely appreciate your time and effort in reviewing our manuscript.\\n\\nWe have provided detailed responses to your concerns several days ago. **As the author-reviewer discussion period is approaching its end, and given there will not be a second round of discussions, we would like to confirm whether our responses have effectively addressed your concerns.**\\n\\nShould you require any additional clarification or have remaining questions, we remain fully committed to addressing them.\\n\\n\\nBest regards,\\n\\nAuthors of Submission 2783\"}", "{\"title\": \"Author Response for Reviewer UuYZ (Part 1)\", \"comment\": \"Dear Reviewer UuYZ:\\n\\nThank you for your acknowledgment and constructive comments. We provide discussions and explanations about your concerns as follows.\\n\\n**Q1: Regarding the evaluation of sim-to-real gap: Additional evaluation of the fidelity would be helpful. Could perception metrics like FID be used between ground truth and generated images?**\\n\\n**A1:** To address your concerns, we have included FID metric comparisons below. However, we want to note that FID might not be the ideal metric for evaluation. Some works have pointed out that FID combines sample quality and diversity into a single value, making it unable to distinguish between these two important aspects [1] and lacking interpretability[2]. \\n\\n| Method | FID\\u2193 |\\n| --- | --- |\\n| MagicDrive | 16.20 |\\n| DriveArena | 16.03 |\\n\\nWe believe using driving agents for fidelity evaluation is more intuitive and interpretable. In Tables 1 and 2 of the manuscript, we present various metrics including 3D object detection, map segmentation, and other planning metrics to measure the quality of generated images. 
It's worth noting that in Table 1, due to computational resource constraints, both DriveArena and MagicDrive generate single images at 224\\u00d7400 resolution, which are then upsampled 4x before being input to UniAD for inference. This inevitably introduces some performance loss.\\n\\nFor comparison, we added a row in Table 1 showing results when the original nuScenes images are downsampled by 4x and then upsampled back to the original resolution. As shown in the second row of the table below, there is also some performance degradation compared to the original nuScenes dataset. When the resolution of generated images increases, these perception metrics are expected to improve[3].\\n\\n| Data Source | 3DOD | | BEV Segmentation mIoU (%) | | | | L2 (m)\\u2193 | | | | Col. Rate (%)\\u2193 | | | |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | mAP\\u2191 | NDS\\u2191 | Lanes\\u2191 | Drivable\\u2191 | Divider\\u2191 | Crossing\\u2191 | 1.0s | 2.0s | 3.0s | Avg. | 1.0s | 2.0s | 3.0s | Avg. |\\n| ori nuScenes | 37.98 | 49.85 | 31.31 | 69.14 | 25.93 | 14.36 | 0.51 | 0.98 | 1.65 | 1.05 | 0.10 | 0.15 | 0.61 | 0.29 |\\n| **nuScenes w/ downsample** | 31.20 | 45.22 | 29.19 | 65.83 | 23.51 | 12.99 | 0.60 | 1.10 | 1.85 | 1.18 | 0.08 | 0.28 | 0.66 | 0.34 |\\n| MagicDrive | 12.92 | 28.36 | 21.95 | 51.46 | 17.10 | 5.25 | 0.57 | 1.14 | 1.95 | 1.22 | 0.10 | 0.25 | 0.70 | 0.35 |\\n| DRIVEARENA | 16.06 | 30.03 | 26.14 | 59.37 | 20.79 | 8.92 | 0.56 | 1.10 | 1.89 | 1.18 | 0.02 | 0.18 | 0.53 | 0.24 |\\n\\n[1] Kynk\\u00e4\\u00e4nniemi, Tuomas, et al. \\\"Improved precision and recall metric for assessing generative models.\\\"\\u00a0*Advances in neural information processing systems*\\u00a032 (2019).\\n\\n[2] Naeem, Muhammad Ferjad, et al. \\\"Reliable fidelity and diversity metrics for generative models.\\\"\\u00a0*International Conference on Machine Learning*. PMLR, 2020.\\n\\n[3] Gao, Ruiyuan, et al. 
\\\"Magicdrive: Street view generation with diverse 3d geometry control.\\\"\\u00a0*arXiv preprint arXiv:2310.02601*\\u00a0(2023).\"}", "{\"summary\": \"The submission introduces DriveArena, a high-fidelity closed-loop simulation platform designed for testing and developing autonomous driving agents in real-world scenarios. The platform consists of two main components: the Traffic Manager and the World Dreamer. The Traffic Manager is responsible for generating realistic traffic flow on any global street map, while the World Dreamer is a high-fidelity conditional generative model that creates infinite autoregressive simulations. DRIVEARENA enables the generation of diverse traffic scenarios with varying styles and allows driving agents that can process real-world images to navigate within its simulated environment.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This submission targets an important problem in the field of autonomous driving: how to properly evaluate the performance of end-to-end systems. The submission introduces a closed-loop evaluation method, which is more reflective of real-world driving conditions compared to open-loop evaluations. It will be useful for practical applications.\\n2. The platform utilizes road networks from cities worldwide and allows for the generation of diverse traffic scenarios with varying styles, which is essential for training and evaluating driving agents across different driving environments. \\n3. The submission provides a clear and detailed explanation of the technical aspects of DriveArena. The figures, tables, and appendices enhance the understanding of the system's components and their interactions.\", \"weaknesses\": \"1. The submission lacks novelty. 
The proposed DriveArena is just a combination of several existing methods: LimSim for traffic simulation and condition generation; DriveDreamer/Vista for generating images from the conditions; NAVSIM and Carla for closed-loop evaluation.\\n2. The World Dreamer model is trained primarily on the nuScenes dataset, which may not capture diverse driving scenarios. To improve the model's generalizability, it would be beneficial to incorporate additional datasets that represent different geographical locations, driving cultures, and road conditions.\\n3. This submission fails to address an important issue for closed-loop evaluation: the model should be able to generate the same scene captured from different positions (similar to actual scenarios of driving differently in the same scene). No visualization was found addressing such an issue. \\n4. The experiments are insufficient. The submission primarily focuses on the simulation platform itself rather than an in-depth evaluation of various driving agents within the platform. Expanding the experimental section to include a broader range of driving agents and more extensive testing can help provide a clearer picture of DRIVEARENA's capabilities and limitations.\\n5. Minor issues in writing and presentation. For example, the figures are not vectorized for zooming in, and we suggest replacing them.\", \"questions\": \"Can the authors provide visualizations of generating consistent scenes based on slightly different conditions (e.g., the ego car moves differently), which is a key aspect for closed-loop evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4S2L519nIX
Pushing the Limits of All-Atom Geometric Graph Neural Networks: Pre-Training, Scaling, and Zero-Shot Transfer
[ "Zihan Pengmei", "Zhengyuan Shen", "Zichen Wang", "Marcus D. Collins", "Huzefa Rangwala" ]
The ability to construct transferable descriptors for molecular and biological systems has broad applications in drug discovery, molecular dynamics, and protein analysis. Geometric graph neural networks (Geom-GNNs) utilizing all-atom information have revolutionized atomistic simulations by enabling the prediction of interatomic potentials and molecular properties. Despite these advances, the application of all-atom Geom-GNNs in protein modeling remains limited due to computational constraints. In this work, we first demonstrate the potential of pre-trained Geom-GNNs as zero-shot transfer learners, effectively modeling protein systems with all-atom granularity. Through extensive experimentation to evaluate their expressive power, we characterize the scaling behaviors of Geom-GNNs across self-supervised, supervised, and unsupervised setups. Interestingly, we find that Geom-GNNs deviate from conventional power-law scaling observed in other domains, with no predictable scaling principles for molecular representation learning. Furthermore, we show how pre-trained graph embeddings can be directly used for analysis and synergize with other architectures to enhance expressive power for protein modeling.
[ "Geometric Graph Neural Networks", "Self-supervised Pre-training", "Scaling", "Zero-shot Transfer", "Molecular Representation" ]
Accept (Poster)
https://openreview.net/pdf?id=4S2L519nIX
https://openreview.net/forum?id=4S2L519nIX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ztPC1DrSss", "yvP28hH2X1", "wQCvYJpQTi", "vod3GXohVr", "vejCidlEkZ", "ulccz5KeKN", "s5srCf3t5d", "rOqnUZxQGE", "px86nLcQpt", "pdYNTE134D", "pVzJbY8bCQ", "lIFvgZ6RLH", "j2CpFh5XDZ", "iLzqY2oQwy", "hi1CPNEGxR", "g4v3YcqMF2", "b54xhhiCIj", "ZzWZ2KmHVr", "YFtQnT8Jiw", "TuS9JvgkKb", "SLP2zdlxlk", "R6tfZ9qbLB", "OFwBcyu06d", "NKH6LQZls4", "NFXXe398YO", "JOE9NkDDLm", "J07rspoHIc", "IdYcqHOoNb", "GpLPMkYbdX", "Erryvk57xL", "BA0GAzobt8", "Ap6zcHjTcB", "9EEx39snB7", "8zKhX9ZdtU", "5w3V5g9rTX", "4YgRwsRXXn", "43bMV4gTa2", "1JWjDxBWXu", "1I1ht8DFIS", "0HAkBF8ac9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732468558148, 1732462426904, 1732142203908, 1732820180148, 1732252575444, 1732287194263, 1733179452445, 1732346021967, 1732808505569, 1733218408821, 1732901549330, 1732521563235, 1730535279718, 1732469282822, 1732469325691, 1732289625642, 1732144449571, 1732871460909, 1732143174165, 1732646376563, 1733247243007, 1730579990907, 1732143363650, 1732530450417, 1732141089766, 1733130072191, 1732143653717, 1732289563536, 1734722370758, 1732267782908, 1732662362871, 1732142750028, 1732140875355, 1730645400667, 1729954335659, 1732343431439, 1732351688703, 1737523897045, 1732293651174, 1732142386693 ], 
"note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_GND7" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Area_Chair_8tuF" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_i3nd" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_KcDu" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_i3nd" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_KcDu" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Area_Chair_8tuF" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_KcDu" ], [ "ICLR.cc/2025/Conference/Submission8250/Area_Chair_8tuF" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Reviewer_GND7" ], [ 
"ICLR.cc/2025/Conference/Submission8250/Reviewer_5LXW" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ], [ "ICLR.cc/2025/Conference/Submission8250/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you\", \"comment\": \"We really appreciate the reviewer for helping us improving the manuscript quality. Wish you a good break :)\\n\\nKind Regards,\\nAuthors\"}", "{\"comment\": \"I thank the authors for their careful response, including restructuring and clarifying their research question, which addresses my primary feedback. I am happy to increase my score.\\n\\nA small point of clarification, TDC (https://tdcommons.ai/overview) is a collection of benchmarks beyond small molecules, and includes tasks for macromolecules and peptides. Additional clarifications around the limitations and scope of QM9 address my concerns.\"}", "{\"title\": \"Response to Reviewer KcDu Part 1\", \"comment\": \"## Summary\\n\\nWe appreciate the thoughtful comments and summary from reviewer **KcDu**, which have helped improve the quality of the manuscript. Before proceeding to further discussions, we would like to clarify several key points.\\n\\nOur paper does not focus on extending denoising pre-training techniques, investigating pre-training task choices, or developing new model architectures. Instead, it addresses a previously unanswered research question in graph representational learning: \\n**\\\"Are pre-trained all-atom geometric graph neural network (GNN) representations transferable to protein modeling, and how expressive are they?\\\"**\\n\\nTo answer this research question, our contributions are as follows:\\n1. 
**Scaling Behaviors of Geometric GNNs:** \\n We studied the scaling behaviors of state-of-the-art geometric GNNs in unsupervised, self-supervised, and supervised setups, rather than focusing on creating new architectures or pre-training objectives. \\n2. **Demonstrating Transferability:** \\n We pre-trained these GNNs on small molecular datasets and demonstrated their transferability to proteins with all-atom resolution, highlighting their expressiveness in these settings.\\n\\n---\\n\\n### Performance Increase on VAMP and Fold Classification\\n\\nWe believe the observed performance increases on the VAMP objective and the fold classification task do not stem from specific pre-training techniques. Instead, the improvements result from the fact that we pre-trained geometric GNNs, transferred them in a zero-shot fashion, and organically combined them with higher-level architectures (Page 5 Table 1). Similarly, the improvement in the fold classification results (Page 30 Figure 20) arises because the graph embeddings inferred from pre-trained GNNs contain rich all-atom information, not because denoising pre-training was applied to the downstream tasks (which we did not do). \\n\\n### Zero-shot Transfer Learners \\nWe appreciate the reviewer for noting how our zero-shot learning setup differs from those in natural language modeling and computer vision. Using masked language modeling as an example, language models trained to complete sentences in English can transfer to completing sentences in Chinese. (Same task, different data domain.) In our setup, the backbone network is trained on small molecules with roughly 10-20 heavy atoms using a denoising objective; the backbone is then transferred to infer atomistic embeddings of peptides and proteins, and a separate head is trained with a different objective.
(Different task, different data domain.) Since the backbone network has never seen protein systems, we think it\\u2019s appropriate to describe this as zero-shot transfer for embedding inference.\\n\\n### About other possible pre-training objectives\\nWe appreciate the reviewer for suggesting other possible pre-training objectives. Indeed, there are many other pre-training strategies, such as Noisy Node [1], GraphMVP [2], Frad [3], Uni-Mol [4], etc. The scope of our paper does not lie in comparing the effectiveness of various pre-training strategies or their absolute performance. We chose coordinate denoising as our pre-training objective for its proven effectiveness and simplicity, as coordinate denoising has the level of additive Gaussian noise as its only hyperparameter. Combining other objectives inevitably requires more complicated hyperparameter scanning. Considering our extensive experiments in both pre-training and downstream tasks, it would be impractical to include those comparisons in this paper given the computational budget, as they could easily increase the workload multiplicatively.\\n\\n### Adding a node mask objective\\nIn the kinetic modeling task, we do not think adding a node attribute masking pre-training task (e.g., flipping the atom types) would considerably affect the results, since VAMP aims to find the slow modes of the objective movements as structural information in a single molecular system where all atom types are fixed across samples. Generally, if the two pre-training objectives focus on different perspectives (coordinate denoising and guessing the right atom type), more parameters would be needed to reach the same loss on each pre-training objective.
\\\"Pre-training molecular graph representation with 3D geometry.\\\" arXiv preprint arXiv:2110.07728, 2021.\\n[3] Feng et al. \\\"Fractional Denoising for 3D Molecular Pre-training.\\\" International Conference on Machine Learning (ICML), 2023, pp. 9938\\u20139961. PMLR.\\n[4] Zhou et al. \\\"Uni-mol: A universal 3D molecular representation learning framework.\\\" 2023.\"}", "{\"comment\": \"## Regarding datasets\\n\\nWe do not want to judge whether one particular dataset is \\u2018far from\\u2019 real-world or not. However, we could provide those references [1][2][3] here again, as we did in the paper. We also would like to mention that both PyEMMA [4] and Deeptime [5] are widely used packages in the molecular dynamics community. We will add the corresponding citations in the revised version as well.\\n\\n## Regarding QM9 dataset\\n\\nWhile the reviewer acknowledged that random-split QM9 is not a good benchmark, there is no benefit in performing an additional set of experiments with 5 (tasks) \\u00d7 5 (dimensions) \\u00d7 2 (models) \\u00d7 2 (pretrained or not), especially since this is not the central focus of the paper. Moreover, benchmarking on an improperly handled dataset, such as random-split QM9, could lead to false conclusions, e.g., the existence of power-law scaling for those tasks. We chose the first five targets to investigate the effect of pre-training and scaling, as five targets are already sufficient to observe the pattern and support our claims, especially considering the wide scope and configurations explored in the paper.\\n\\n## Regarding Geom-GNNs\\nFrom lines 033\\u2013039, we indicated that Geometric Graph Neural Networks (Geom-GNNs) refer to a class of GNNs that operate with coordinates. From lines 733\\u2013741, we briefly extend this discussion. 
Furthermore, in both the figure and caption of **Figure 1**, we clearly showed a framework where pre-trained Geom-GNNs act as local geometric descriptors to featurize residue-level conformations. We believe the reviewer has already noticed that we chose two architectures, ViSNet [6] and ET [7], as these were properly cited in the paper.\\n\\nIt is also a common practice to use a smaller learning rate for pre-trained weights than for randomly initialized weights. Furthermore, it is difficult to see the point of comparing an apple to a pear.\\n\\n## Regarding formats\\n\\nWe thank the reviewer for the formatting suggestions. We also clearly indicated in the paper which figures are expected to be found in the appendix as **Figure X (Appendix)**. Moreover, we chose to represent those results as figures because either they cannot be effectively represented as tables or they are easier to interpret as figures.\\n\\n\\n### References\\n\\n1. Mardt, A., Pasquali, L., Wu, H., et al. *VAMPnets for deep learning of molecular kinetics.* Nature Communications, 9, 5 (2018). [https://doi.org/10.1038/s41467-017-02388-1](https://doi.org/10.1038/s41467-017-02388-1)\\n\\n2. N\\u00fcske, F., Wu, H., Prinz, J.-H., Wehmeyer, C., Clementi, C., & No\\u00e9, F. *Markov state models from short non-equilibrium simulations\\u2014Analysis and correction of estimation bias.* Journal of Chemical Physics, 146(9), 094104 (2017). [https://doi.org/10.1063/1.4976518](https://doi.org/10.1063/1.4976518)\\n\\n3. Bowman, G. R., Voelz, V. A., & Pande, V. S. *Atomistic folding simulations of the five-helix bundle protein \\u03bb6\\u221285.* Journal of the American Chemical Society, 133(4), 664-667 (2011). [https://doi.org/10.1021/ja106844r](https://doi.org/10.1021/ja106844r)\\n\\n4. Scherer, M. K., Trendelkamp-Schroer, B., Paul, F., P\\u00e9rez-Hern\\u00e1ndez, G., Hoffmann, M., Plattner, N., ... & No\\u00e9, F. 
*PyEMMA 2: A software package for estimation, validation, and analysis of Markov models.* Journal of Chemical Theory and Computation, 11(11), 5525-5542 (2015). [https://doi.org/10.1021/acs.jctc.5b00743](https://doi.org/10.1021/acs.jctc.5b00743)\\n\\n5. Hoffmann, M., Scherer, M., Hempel, T., Mardt, A., de Silva, B., Husic, B. E., ... & No\\u00e9, F. *Deeptime: A Python library for machine learning dynamical models from time series data.* Machine Learning: Science and Technology, 3(1), 015009 (2021). [https://doi.org/10.1088/2632-2153/ac2a50](https://doi.org/10.1088/2632-2153/ac2a50)\\n\\n6. Wang, Y., Wang, T., Li, S., et al. *Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing.* Nature Communications, 15, 313 (2024). [https://doi.org/10.1038/s41467-023-43720-2](https://doi.org/10.1038/s41467-023-43720-2)\\n\\n7. Th\\u00f6lke, P., & De Fabritiis, G. (2022). *Torchmd-net: Equivariant transformers for neural network-based molecular potentials.* arXiv preprint arXiv:2202.02541. [https://arxiv.org/abs/2202.02541](https://arxiv.org/abs/2202.02541)\"}", "{\"comment\": \"Hi reviewers,\\n\\nThe authors have posted their rebuttals. Could you please check their responses and engage in the discussions? Please also indicate if/how their responses change your opinions.\\n\\nThanks,\\n\\nAC\"}", "{\"comment\": \"I didn't have very much time yet to look thoroughly through the responses and plan to do that later.\\n\\n**Regarding the Comparison of \\u2018Molecular Dynamics\\u2019**\\n\\nIt's clear that it is not comparable and might have been also a misunderstanding from my side. But why to choose the kinetic modeling task and not to check how good the pretraining strategy works, e.g., with respect to enhanced sampling techniques for speeding up molecular dynamics?\", \"the_problem_for_me_with_the_kinetic_modeling_task\": \"Are there any publications out on exactly this dataset? Or did you create the dataset yourself? 
As a reviewer, I would prefer to see independent publications having reported on one and the same dataset/benchmark to be better able to judge how much improvement over previous state-of-the-art there is now.\\n\\nWould it be possible that you try out your method on a task, where already several other independent researchers reported results, or point me to publications that reported results on exactly the kinetic dataset you looked at?\\n\\n**Regarding the Lack of Extensive Comparison**\\n\\nHaving investigated pretraining behavior on only two arbitrary (?) GNN architectures in general seems not very convincing to make general statements about pretraining, which should possibly be relevant to a larger field. Moreover, one of the two architectures seems to be an extension of the other one as the authors write. Why did the authors not consider architectures like EGNN, SEGNN, PaiNN, Equiformer, NequIP, Allegro, MACE etc.? \\nWhy have the authors exactly chosen the two architectures they have chosen? I wonder what the rationale is for comparing two related architectures while ignoring others. In case the authors think that experiments for VisNet seem to be exceptionally important, they should possibly also restrict the validity of any statement in the title, the abstract and the paper towards this architecture or towards VisNet and ET only, and not refer to the more general term of \\\"Geom-GNNs\\\".\"}", "{\"comment\": \"We thank the reviewer for clarifying their new questions.\\n\\nIn **Table 3**, together with **Table 9 (Appendix)**, we reported the base GNN models (pre-trained or not pre-trained) on the xxMD temporal split. From **Line 440\\u2013451**, we explained the results showing that only models pre-trained with non-equilibrium structures exhibit positive transfer. Another phenomenon we discussed in the text is that ET is a less-performing model compared to ViSNet.
However, ET demonstrates positive transfer on 4/4 tasks, while ViSNet only shows positive transfer on 2/4 tasks, even though ViSNet outperforms ET in all tasks. In **Lines 450\\u2013451**, we further clarified that this observation can be attributed to the DFT data uncertainty. We do not aim to grid search for a set of hyperparameters to minimize the test error of a specific dataset, as this is beyond the scope of the paper. Thank you for the suggestion; we will add a clarifying sentence in the conclusion.\\n\\nWe appreciate the reviewer's feedback, which has helped improve the quality of the manuscript. As we have provided additional data and experimental results, and responded to all the comments, we hope the reviewer can change their score.\"}", "{\"comment\": \"Thank you to the authors for their hard work and thoughtful response in addressing the identified weaknesses. I will maintain my score as it stands, as I am not an expert in this field, and the initial score I provided is already sufficiently high from my perspective. Thank you again!\"}", "{\"comment\": \"Sorry, it has been a busy time for me. I have read the paper again in light of the review comments and responses to them.\\nOverall the authors' research goals are clearer to me now. Especially it was not my intention \\\"to frame their work as a continuation of pre-training methods or graph scaling laws, despite our repeated clarifications\\\", but I might have misunderstood it.\\n\\n> We kindly ask reviewer 5LXW to clarify the meaning of comparing kinetic modeling \\\"with respect to the speeding up of molecular dynamics\\\" or \\\"how effective the pre-training is.\\\" \\n\\nHere the authors must have misunderstood me completely with what I meant; however, let's not dwell on it further.\", \"with_respect_to_the_dataset_for_kinetic_modeling_tasks\": \"I find it strange when one has to compare with a tutorial example.
It raises concerns, whether this is far from real-world.\", \"with_respect_to_random_split_qm9\": \"Here the authors must have misunderstood me. I even marked it as \\\"Strengths\\\", that they considered not only random splitting.\\nHowever, even if results are overoptimistic, many researchers evaluated their methods for this benchmark (which does not mean, that I think it's necessarily a good dataset or benchmark, but), which could nevertheless give additional insights.\\n\\nI must acknowledge, however, that I am not an expert in each of the specific subfields from which the authors have drawn their datasets. Apparently, other reviewers found the experiments on the selected datasets to be scientifically more valuable than I did (or at least they did not question it). That is one reason why I am considering possibly raising the score.\", \"clarity_of_the_contributions\": [\"To me it seems that Geom-GNN is the same as shown in Figure 1 and refers to a specific network architecture. Is this true?\", \"Is Geom-GNN a new architecture, a generic term, or, an established architecture from somewhere else?\", \"If it is a new architecture, the authors should declare it as a contribution.\", \"If it is a generic term, then the authors should throughout the paper be more specific which network architecture they are actually referring to.\", \"If it is an established architecture from somewhere else, they should properly cite it.\"], \"there_are_further_questions_on_hyperparameter_selection\": [\"When an architecture with pretrained features is compared to an architecture with non-pretrained features, is the hyperparameter selection individual for each of the two cases? Maybe architectures with non-pretrained features need more layers. What is the authors' opinion on this?\"], \"question_on_scaffold_qm9\": \"Why are only 5 columns shown in Table 2? Does Scaffold-QM9 have less targets than QM9? 
If it does not have less targets, why did you skip the other targets?\", \"presentation_of_the_paper\": [\"There is a typo in the first sentence of the paper:\", \"In silico molecular computation and simulation are indispensable tool**s** in modern research for biology, chemistry, and material sciences capable of accelerating the discovery of new drugs and materials, as well as prediction of protein structures and functions\", \"This sentence seem weird to me:\", \"As we have studied a few preliminary application of pre-trained graph embedding, we wonder if the power of features given by Geom-GNNs with varying architectures and configurations.\", \"The paper is hard to read as many figures, etc. are in the appendix and it is not clear with the figure numbers, whether the figure is expected to be found in the main text or in the appendix\"]}", "{\"comment\": \"I am not super-convinced the paper should be accepted;\\nDue to the discussion with the authors their research goal became more clear to me. I have the impression that the relevance of this work is a little bit limited. The empirical investigations might have a clear novel aspect, although to me unclear how interesting they are to the community.\\n\\nAt the same time, my point of view might be subjective and I have to recognise that others see the manuscript much more positive. \\nSince I am not the person, who wants to block a potentially useful publication, I raise my score to 6.\", \"for_others_who_read_my_one_or_two_initial_reviews\": \"There might have been misunderstandings from my side. I do not change the text (and the initial ratings except the overall rating) at this stage any more, but please do not necessarily rely/refer to it.\"}", "{\"comment\": \"We believe it is very important to understand the representational power of the underlying 3D GNN models; thus, we thoroughly studied the self-supervised pre-training and supervised fine-tuning in the atom-scale tasks to estimate the resulted features. 
We also thank the reviewer for acknowledging all the novel results presented in the paper, where those pre-trained features can be generally applicable to a variety of downstream tasks such as embedding/conformational analysis, kinetic modeling, enhancing existing architectures as stronger features than conventional descriptors, and we thoroughly studied the scaling behaviors at different levels.\\n\\nRegarding the reviewer\\u2019s suggestion of adding an additional comparison of pre-trained models (apple) with non-pre-trained models with a different number of layers (pear), we believe this would not be a fair comparison as the variables are not controlled. Additionally, we are not hiding any details about hyperparameter selection, and we kindly refer the reviewer to **Appendix F** for all the hyperparameters.\\n\\nWe thank the reviewer for catching the formatting issues. We will thoroughly ensure that all the figures in the appendix are annotated as **Figure X (Appendix)** in the revised version.\"}", "{\"title\": \"Official Comment by Reviewer KcDu\", \"comment\": \"Thank you for the detailed response to my comments. I appreciate the clarifications provided regarding the experimental setups, the scope of the study, and the comparisons to related works. The authors\\u2019 comprehensive exploration of pre-trained and non-pre-trained 3D GNNs across diverse tasks, coupled with their use of rigorous dataset splits, thoroughly addresses my concerns. Additionally, I acknowledge that the distinctions between the supervised, self-supervised, and unsupervised setups have already been reiterated and clarified in the article. 
These points, combined with the authors\\u2019 thorough responses, address my concern, and I am willing to raise my score.\"}", "{\"summary\": \"This work presents a novel investigation into whether pre-trained Geom-GNNs (Graph Neural Networks for Conformational Molecules) possess efficient and transferable geometric representation capabilities, particularly in addressing the low generalization of models typically trained on specific tasks. The authors also aim to introduce \\u201cNeural Scaling Laws\\u201d to summarize the performance behavior of these pre-trained Geom-GNNs. However, it is unfortunate that the experimental results indicate that Geom-GNNs do not adhere to power-law scaling laws and fail to demonstrate predictable scaling behavior across various supervised tasks. Furthermore, the findings reveal that the all-atom embedding graph representations derived from Geom-GNNs exhibit excellent expressive capabilities, suggesting that Geom-GNNs can function effectively as zero-shot transfer learners.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. **Innovation**: The study presents a novel perspective, exploring an area that remains under-researched (to my knowledge), offering significant insights for the development of Geom-GNNs.\\n2. **Clarity and Structure**: The article is well-organized, with clear presentation and summary of experiments and viewpoints, facilitating reader comprehension.\\n3. **Robust Experimentation**: The experimental design is thorough, effectively supporting the authors\\u2019 conclusions.\\n4. **Exploration of Zero-Shot Transfer Capability**: The investigation into the zero-shot transfer ability of Geom-GNNs is intriguing, with experiments indicating their potential as excellent zero-shot transfer learners.\\n5. 
**Pre-training Insights**: Through extensive denoising pre-training tasks, valuable experiences have been gained regarding the pre-training of Geom-GNNs, including aspects such as model width, depth, aspect ratio, and the cutoff radius in geometric atomic graph construction, providing rich guidance for pre-training.\\n6. **Advancement of Unified Architecture**: Given the widespread attention and efforts in the research of all-atom Geom-GNNs, this study effectively inspires researchers to reconsider the design of Geom-GNN architectures and the adjustment of training strategies, thereby promoting the development of a unified Geom-GNN architecture.\", \"weaknesses\": \"1. In Figure 6, which explores different model widths, even though the x-axis represents the total number of parameters (with model depth held constant), it would be more beneficial to indicate the model width for each point in the legend to enhance result presentation. Similarly, in Figures 4 and 7, which demonstrate the impact of model depth, using the legend to specify the exact number of layers might be more effective. In general, clear legends are always advantageous.\\n2. The comparison between models trained from scratch and those fine-tuned in Section 6.1 could be more comprehensive if extended to include model depth. Previous discussions (albeit in the context of pre-training) have presented certain viewpoints, and it is anticipated that these would also have significant effects during fine-tuning.\", \"questions\": \"as Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer KcDu,\\n\\nWe hope our previous response has addressed your concerns. If there are remaining issues, we would be happy to discuss them further. 
We kindly ask if you could specify which points require clarification, ideally with more detailed setups or comparison to the existing literature. This would greatly assist us in refining our work constructively and addressing your feedback effectively.\\n\\nKind Regards,\\nAuthors\"}", "{\"title\": \"Further Discussion\", \"comment\": \"Dear Reviewer 5LXW,\\n\\nWe hope our previous response has addressed your concerns. If there are remaining issues, we would be happy to discuss them further. We kindly ask if you could specify which points require clarification, ideally with more detailed setups or comparison to the existing literature. This would greatly assist us in refining our work constructively and addressing your feedback effectively.\\n\\nKind Regards,\\nAuthors\"}", "{\"title\": \"Response to Reviewer KcDu Reply Part 2\", \"comment\": [\"### [2]\", \"**Figure 3:** Dataset size scaling of 2D GIN (pre-trained with GraphMAE using PCQM), SMILES (one-layer Transformer), and Fingerprint (one-layer Transformer) models on three randomly split MoleculeNet subsets (molecular property prediction).\", \"**Figure 4:** Pre-trained versus non-pre-trained 2D GIN on dataset size scaling for three randomly split MoleculeNet subsets (molecular property prediction).\", \"**Figure 5:** Dataset size scaling of 2D GIN on three MoleculeNet subsets with different splits (random, scaffold, and imbalance) (molecular property prediction).\", \"**Figure 6:** Dataset size scaling and parameter scaling of 2D GIN on randomly split MoleculeNet subsets (molecular property prediction).
Aspect ratios were not controlled.\", \"**Figure 8:** Dataset size scaling of non-pre-trained 3D PaiNN, SphereNet, and SchNet models on three randomly split QM9 subsets (molecular property prediction).\", \"**Figure 9:** Similar to Figure 5, but using a one-layer Transformer with fingerprints.\", \"### [3]\", \"**Figures 2-3/10-12:** Dataset size scaling and parameter scaling of non-pre-trained 2D GIN and GCN models on PCQM and PPA datasets (molecular property prediction).\", \"**Figure 4:** Parameter scaling of non-pre-trained 2D GIN, GCN, SAT, and GPS models on PPA (molecular property prediction).\", \"**Figures 18/19:** Parameter scaling of non-pre-trained 2D GIN and GCN models on Reddit, HIV, and PCBA datasets (molecular property prediction).\", \"---\", \"### Comparison to Our Work\", \"In comparison to reference [1]\\u2014the only 3D GNN scaling study\\u2014which focuses on force field tasks with non-pre-trained models in a supervised fashion, we have studied a wider variety of tasks beyond force field tasks, using both pre-trained and non-pre-trained models across supervised, self-supervised, and unsupervised settings. Additionally, we employed statistically more rigorous dataset splits for each task. Furthermore, we proposed the inference of all-atom embeddings in combination with other higher-order architectures.\", \"Compared to references [2] and [3], which focus on 2D graph models, we conducted a comprehensive study of 3D GNNs across a variety of setups and tasks, as detailed earlier.\", \"## References:\", \"[1] Frey, Nathan C., et al. \\u201cNeural scaling of deep chemical models.\\u201d Nature Machine Intelligence 5 (2023): 1297\\u20131305.\\n[2] Chen, Dingshuo, et al. \\\"Uncovering neural scaling laws in molecular representation learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n[3] Liu, Jingzhe, et al.
\\\"Neural scaling laws on graphs.\\\" arXiv preprint arXiv:2402.02054 (2024).\"]}", "{\"title\": \"Global Rebuttal\", \"comment\": \"We sincerely thank the reviewers for their thoughtful and constructive feedback, which has greatly contributed to improving the quality and clarity of our manuscript. Below, we summarize the major updates and changes made in response to the reviewers\\u2019 comments:\\n\\n### Summary of Changes and Key Clarifications:\\n\\n1. **Research Question and Contributions:**\\n - We emphasized our central research question: \\n **\\\"Are pre-trained all-atom geometric graph neural network (GNN) representations transferable to protein modeling, and how expressive are they?\\\"**\\n - To address this, we studied the scaling behaviors of state-of-the-art geometric GNNs in unsupervised, self-supervised, and supervised setups and demonstrated their transferability to proteins with all-atom resolution.\\n\\n2. **Figure Revisions:**\\n - Based on the feedback from **Reviewer i3nd**, we revised Figures 4, 6, and 7 for improved clarity by annotating legends with specific model depths and widths where relevant. These updates are reflected in the manuscript.\\n\\n3. **Restructuring the Introduction and Abstract:**\\n - Per suggestions from multiple reviewers, we restructured the introduction and abstract sections to better highlight our research question, contributions, and the relevance of this study.\\n\\n4. **Additional Experimental Results:**\\n - Responding to feedback from **Reviewer i3nd**, we added experimental results comparing the effect of model depth and aspect ratio during fine-tuning on QM9. The results, summarized in **Appendix L, Table 10 (Page 32)**, show a general trend where pre-trained deeper models perform better than shallower ones, aligning with findings in prior literature (e.g., Li et al. 2024). \\n\\n5. 
**Clarifications on Zero-Shot Transfer and Pre-Training Objectives:**\\n - We addressed concerns from **Reviewer KCDU** and **Reviewer 5LXW** about the definition of zero-shot transfer learning. Our setup involves pre-training on small molecules and directly transferring embeddings to infer atomistic representations for proteins, which is then combined with separate tasks. We clarified why this qualifies as zero-shot embedding inference, distinct from fine-tuning.\\n - We elaborated on the rationale behind selecting coordinate denoising as the pre-training objective for its simplicity and effectiveness.\\n\\n6. **Benchmarks and General Evaluations:**\\n - We addressed concerns about dataset diversity by discussing scaffold splitting in QM9 and incorporating non-equilibrium datasets like xxMD. These ensure rigorous evaluation of generalization capabilities.\\n - Additional data on xxMD datasets comparing pre-trained and non-pre-trained models were provided in **Appendix L, Table 9 (Page 31)** to better support claims about model performance across dimensions and molecular systems.\\n\\n### Final Remarks\\n\\nWe hope these revisions and clarifications address the reviewers' concerns and further demonstrate the novelty and importance of our work. We remain grateful for the reviewers' insightful feedback, which has strengthened our manuscript significantly.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"> From lines 033\\u2013039, we indicated that Geometric Graph Neural Networks (Geom-GNNs) refer to a class of GNNs that operate with coordinates. From lines 733\\u2013741, we briefly extend this discussion. Furthermore, in both the figure and caption of Figure 1, we clearly showed a framework where pre-trained Geom-GNNs act as local geometric descriptors to featurize residue-level conformations. 
We believe the reviewer has already noticed that we chose two architectures, ViSNet [6] and ET [7], as these were properly cited in the paper.\\n\\nIf the architecture in Figure 1 is the authors' own contribution, I really recommend giving it a name and restricting the considerations and conclusions to this architecture. There might be options other than the architecture in Figure 1 for making use of pretrained features, and such other architectures could show a much different scaling behavior.\\n\\nDo the authors think that theirs is the first architecture at all that makes use of pretrained geometric features?\\n\\n\\n> Furthermore, it is difficult to see the point of comparing an apple to a pear.\\n\\nI don't get the point on apples and pears. Is it an answer to my question on hyperparameter selection? \\nIt is the task of the authors to provide clear explanations and not hide what they did. It is important to know this for replicability and for judging and understanding their statements on scaling behaviors etc.\\\\\\nIf hyperparameter selection has been done differently for different types of pretrained features, for pretrained and non-pretrained features, or for different datasets, this has to be described thoroughly.\\\\\\nIf it has not been done individually, this also has to be described and is possibly even more important to know.\\n\\n> We also clearly indicated in the paper which figures are expected to be found in the appendix as Figure X (Appendix). \\n\\nThis is not true.
The authors should, for example, look at their sentence \"Per Figure 20, we observe improvement of integrating\nsuch all-atom features into the ProNet without angle information.\" Figure 20 is somewhere in the appendix, although it is not declared in this sentence to be in the appendix.\"}", "{\"title\": \"Response to Reviewer 5LXW Part 1\", \"comment\": \"## Reply to Summary\\n\\nWe sincerely appreciate the reviewer **5LXW** for the feedback and comments to improve the quality of the paper. Before proceeding further, we think it is necessary to reformulate the summary to reiterate the motivation of our paper. Instead of extending denoising pre-training techniques, investigating the pre-training task choice, or developing model architectures, our paper aims to answer a previously unanswered research question in graph representational learning, which fits the ICLR venue: \\n**\\\"Are pre-trained all-atom geometric graph neural network (GNN) representations transferable to protein modeling, and how expressive are they?\\\"**\\n\\nTo answer this research question, our contributions are as follows: \\n1. **Scaling Behaviors of Geometric GNNs:** \\n We studied the scaling behaviors of state-of-the-art geometric GNNs in unsupervised, self-supervised, and supervised setups, instead of focusing on developing new model architectures or pre-training objectives. \\n2. **Demonstrating Transferability:** \\n We pre-trained these GNNs on small molecular datasets and demonstrated their transferability to proteins with all-atom resolution, highlighting their expressiveness in these settings.\\n\\n### Pre-training and Zero-Shot Transfer Learning\\n\\nIn terms of pre-training, we studied scaling behaviors with various model configurations.
In the zero-shot transfer learning setup, we inferred the atomistic embeddings of all-atom peptides and proteins, coarse-grained them to residue-wise embeddings, and combined them with other architectures for conformational kinetic modeling (VAMPNet) and fold classification tasks. In both tasks, pre-trained all-atom embeddings demonstrated excellent transferability.\\n\\n### Small Molecule Setups\\n\\nIn the small molecule setups, we studied molecular property prediction (QM9) and molecular force field prediction (xxMD) with both pre-trained and non-pre-trained models. In all the aforementioned setups, we did not find predictable power-law scaling (not just on the pre-training task). During pre-training, we observed that shallow models exhibit much higher pre-training loss compared to deeper models with similar parameter counts (hence why we mentioned **under-reaching**). Additionally, we found that the benefits of increasing depth diminish after six layers (hence why we mentioned **over-smoothing**).\\n\\n### QM9 and Kinetic Modeling Observations\\n\\nIn the QM9 experiments, we noted that when computing QM9 labels\\u2014such as the HOMO-LUMO gap\\u2014different quantum chemical methods produce eV-scale differences (Page 19, Figure 5). Therefore, we should not expect further improvements from machine learning models if they already reach the data uncertainty limit.\\n\\nIn contrast, in the kinetic modeling task (Page 5, Table 1), we found that the \"benefit\" of scaling in the no-mixer setup cannot compare to the improvement from adding a few layers of MLP or Transformer, which allows direct modeling of interdependence among structural units.\"}", "{\"title\": \"Kind Reminder Regarding Reviewer Comments\", \"comment\": \"Dear **Reviewer 5LXW**,\\n\\nAs the revision process is nearing completion, we would like to kindly remind you to review our responses to your comments, which addressed your concerns point by point.
We hope our explanations have clarified any remaining gaps, as we have already addressed all other reviewers' comments and suggestions. \\n\\nIf there are any additional points to discuss, we would be happy to engage further, although we regret that we are unable to conduct additional experiments at this stage.\\n\\nThank you for your time and understanding.\\n\\nBest regards,\\nThe Authors\"}", "{\"comment\": \"Thank you very much for your insights and time engaging in the discussion, and for recognizing the novelty of our empirical investigations.\"}", "{\"summary\": \"The paper investigates denoising pretraining and potential scaling laws for geometric graph neural networks (GNNs) on supervised tasks. These GNNs are pre-trained on noisy coordinate data to learn how to denoise and reconstruct the original coordinates. The effectiveness of this approach is tested on various downstream applications, including molecular kinetics, fold classification, and energy and force prediction. Additionally, the paper examines the scaling behavior of these models and highlights specific limitations in supervised prediction tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Demonstrates substantial performance improvements in kinetic modeling on VAMP score metrics by utilizing denoising pretraining techniques.\\n2. Applies the self-supervised pretraining approach to a variety of downstream tasks, successfully proving its effectiveness across applications.\\n3. Examines scaling laws in both standard equivariant models and pre-trained equivariant models, finding that even pre-trained models diverge from typical neural scaling laws due to issues like under-reaching and over-smoothing.
The author suggests that, while scaling models has its benefits for supervised and unsupervised tasks, it may be more effective to focus on addressing data label uncertainty and using active token mixing to mitigate information bottlenecks.\", \"weaknesses\": \"1. The author claims to be the first to demonstrate the pre-trained Geom-GNNs\\u2019 capability as zero-shot transfer learners. However, I would consider this more appropriately described as downstream task fine-tuning. In zero-shot learning, a model trained on one data source is directly applied to an unseen data class without additional training (e.g., training on English and French, then testing on Chinese without further adjustments). Unlike this approach, for molecular kinetics the paper involves training a separate network with the VAMP score objective.\\n2. The pretraining methods in this work are limited to coordinate denoising. Other approaches [1,2] that leverage both node mask prediction and coordinate denoising have already proven effective.\\n3. Although the paper demonstrates the effectiveness of ET and ViSNet on several tasks, it does not include evaluations on invariant feature-based networks (such as SchNet, DimeNet, or GemNet) or tensor product-based networks like Equiformer and MACE.\\n\\n[1] Cui, Taoyong, et al. \\\"Geometry-enhanced pretraining on interatomic potentials.\\\" Nature Machine Intelligence 6.4 (2024): 428-436.\\n[2] Zhou, Gengmo, et al. \\\"Uni-mol: A universal 3d molecular representation learning framework.\\\" (2023).\", \"questions\": \"1. The paper claims to demonstrate zero-shot transfer learning using pre-trained Geom-GNNs. However, the described method seems closer to downstream task fine-tuning, given that a separate network is trained with the VAMP score objective. Could the authors clarify how this approach qualifies as zero-shot transfer rather than fine-tuning?\\n\\n2.
Limited Pre-Training Approaches:\\nThe pre-training in this work is restricted to coordinate denoising. Given that prior work has successfully used a combination of node mask prediction and coordinate denoising for improved performance, how might adding a node mask objective influence the molecular kinetics tasks? Would it enhance the model\\u2019s ability to generalize across different molecular conformations?\\nAdditionally, could the authors hypothesize the potential impact of such an extended pre-training approach on scaling behavior? \\n\\n3. While the effectiveness of ET and ViSNet is demonstrated on several tasks, the study lacks comparisons with invariant feature-based networks (e.g., SchNet, DimeNet, GemNet) and tensor product-based networks (e.g., Equiformer, MACE). Could the authors provide insights into how their method might perform relative to these alternative architectures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 5LXW Part 2\", \"comment\": \"## Reply to the Paper Presentation\\n \\nWe sincerely appreciate the reviewer 5LXW\\u2019s opinions on the paper structure; we restructured the introduction and the abstract of the paper to emphasize our RQ. \\n\\n## Reply to the Novelty Issues\\n\\nWe appreciate reviewer **5LXW** for their thoughtful and constructive feedback and the opportunity to clarify the novelty and contributions of our work.\\n\\n### On the Novelty of Our Study\\n\\nWhile previous studies, such as Zaidi et al. (2022) [1], have explored the benefits of pre-training in GNNs, our work differentiates itself in several significant ways:\\n\\n#### **Comprehensive Study of Scaling Behaviors in Geometric GNNs**\\n1.
**First of Its Kind:** \\n To our knowledge, this paper presents the first extensive examination of scaling laws specifically for all-atom geometric GNNs in molecular learning and the first study on how pre-trained all-atom graph embeddings can be transferred to protein modeling tasks.\\n\\n2. **Wide Range of Configurations:** \\n We investigate not only model size but also aspect ratios, radial cutoffs, architectural choices, and pre-training datasets, providing practical insights into the design and configuration of geometric GNNs.\\n\\n3. **Diverse Applications:** \\n Our analysis spans multiple tasks, including kinetic modeling, protein folding classification, force field predictions, and quantum chemical property predictions, showcasing the versatility and limitations of scaling in different contexts.\\n\\n#### **Insights into Transferability and Expressiveness**\\nWe delve into how pre-trained all-atom graph embeddings transfer across various downstream tasks, studying the role of embedding dimensionality and the incorporation of token mixing modules (e.g., MLPs, Transformers, GNNs) for protein modeling. Our findings reveal:\\n- Simply increasing embedding dimensionality yields diminishing returns.\\n- Architectures that model interdependencies among structural units lead to significant performance gains.\\n\\n#### **Novel Perspectives and Benchmarks**\\n1. **Improved Evaluation Strategies:** \\n By employing scaffold splitting in QM9 and utilizing non-equilibrium datasets like xxMD, we challenge conventional evaluation methods, providing more rigorous assessments of model generalization capabilities.\\n\\n2. 
**Analysis of Scaling Benefits:** \\n We demonstrate that scaling up model parameters does not uniformly improve performance, particularly when data uncertainties impose fundamental limits\\u2014a nuance that prior work has not thoroughly explored.\\n\\n---\\n\\n### On Formal Connections to GNN Properties (Oversmoothing, Underreaching)\\n\\nWe acknowledge that we did not explicitly investigate formal properties like oversmoothing or underreaching, as these topics are not central to our study. Instead, our focus lies on:\\n- Empirical scaling behaviors.\\n- Transferability of pre-trained embeddings.\\n- Practical implications for molecular and protein modeling tasks.\\n\\nWe believe that understanding these empirical trends is a critical step before delving into formal theoretical analyses, providing a foundation for future investigations into these topics.\\n\\n[1] Zaidi, Sheheryar, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter Battaglia, Razvan Pascanu, and Jonathan Godwin. \\\"Pre-training via denoising for molecular property prediction.\\\" arXiv preprint arXiv:2206.00133, 2022\"}", "{\"title\": \"Thank you\", \"comment\": \"We appreciate the reviewer for engaging in the discussion and going through all the responses. We wish you a good break :)\\n\\nKind Regards,\\nAuthors\"}", "{\"title\": \"Response to Reviewer GND7 Part 2\", \"comment\": \"## Response to Concern About Lack of General Evaluations\\n\\nWe thank the reviewer for raising this point. Our study thoroughly examines the performance and scaling behavior of pre-trained GNNs across diverse tasks, including:\\n\\n- **Single-molecule conformational variety:** Force field regression and kinetic modeling. \\n- **Multi-molecule chemical variety:** Quantum property prediction. \\n- **Peptide/protein conformational variety:** Kinetic modeling. 
\\n- **Protein biological variety:** Folding classification.\\n\\nWe present over 100 pre-training experiments, testing two GNN architectures with varied depth, width, aspect ratio, and radius cutoff configurations on equilibrium and non-equilibrium datasets. Key experiments include:\\n\\n1. **VAMPNet:** Three systems (molecules to proteins) tested with different embedding dimensions and token mixers. \\n2. **Folding classification:** ProNet (with/without pre-trained all-atom embeddings) tested across dimensions, representing the most comprehensive study in relevant literature. \\n3. **Force field prediction:** Two GNNs (non-pre-trained and pre-trained on PCQM and Denali) evaluated across hidden dimensions on 4 molecular systems from xxMD. \\n4. **Molecular property prediction:** Two GNNs (non-pre-trained and pre-trained on PCQM) evaluated across hidden dimensions on 5 tasks.\\n\\n### Key Insights About Scaling\\n\\n#### **Embedding Transfer for Protein Modeling** \\nPre-trained embeddings without token mixing (e.g., simple summation) show fictitious scaling benefits due to higher-dimensional embeddings retaining more information. Introducing token mixers (e.g., MLPs or Transformers) enables structural unit interdependence modeling, significantly boosting performance. While larger models show slight gains with mixers, there are no sustained scaling benefits, as confirmed by folding classification results (Page 30, Figure 20).\\n\\n#### **Label Uncertainty Bottlenecks** \\n\\n1. **xxMD:** Pre-training benefits vary. ViSNet, already near the label uncertainty limit, shows minimal pre-training gains (2 of 4 sets benefit), while the ET architecture shows consistent gains across all 4 sets (Page 9, Table 3; Page 32, Table 9). Scaling data, rather than models, proves more impactful in low-data regimes. \\n\\n2. 
**QM9:** For tasks like quantum chemical property prediction (e.g., the HOMO-LUMO gap), the inherent uncertainty in labels limits test-set accuracy regardless of pre-training or model expressiveness. For example, common density functionals exhibit eV-scale differences (Page 16, Figure 5), making milli-eV-scale accuracy unrealistic for QM9.\\n\\n### Additional Data Provided\\n\\nWe also include xxMD-temporal benchmark data (dimensions 64\\u2013384) across all setups (non-pre-trained, pre-trained on PCQM/Denali) in the appendix (Page 32, Table 9).\"}", "{\"comment\": \"> Regarding the reviewer\\u2019s suggestion of adding an additional comparison of pre-trained models (apple) with non-pre-trained models with a different number of layers (pear), we believe this would not be a fair comparison as the variables are not controlled. Additionally, we are not hiding any details about hyperparameter selection, and we kindly refer the reviewer to Appendix F for all the hyperparameters.\\n\\nI am not suggesting new comparisons.\\nBut I wonder whether it **has to be declared** as a limitation of your work.\\n\\n1.\\n\\nIn case you are purely doing a comparison of a fixed network architecture and other hyperparameters (up to those network architecture design parameters and hyperparameters of study) once with pre-trained features and once without, and study, e.g., the effect of model width, it seems ok to my understanding.\\n\\n2.\\n\\nHowever, I wonder, e.g., about Table 3. In case you are doing a general comparison along the lines of \"Do pretrained features have advantages over non-pretrained features?\", then one would expect an individual hyperparameter optimization for both of these options. I am not absolutely sure what the exact goal of Table 3 was, though.\\n\\nTo make what I mean clearer, I formulate it as an exemplary question to you: Consider you would need to give advice to a machine learning engineer, whose task is to train a model with a minimum validation error.
The engineer can either take pretrained features or non-pretrained features. Would you recommend that the engineer a) optimize hyperparameters once for pretrained and once for non-pretrained features, or would you recommend b) that the best hyperparameters found with one type of features also have to be used for the other type of features?\\n\\nIf you opt for a), then you should possibly declare the lack of individual hyperparameter optimization as a limitation of your work, in case your goal is/was to show that one type of features is better than the other one. Otherwise it could be misleading in my opinion.\"}", "{\"title\": \"Response to Reviewer 5LXW Part 3\", \"comment\": \"## Reply to Experiments\\n\\n### Regarding the Comparison of \\u2018Molecular Dynamics\\u2019\\n\\nFirst of all, we are unsure about what \\u2018molecular dynamics\\u2019 the reviewer **5LXW** is referring to. Timewarp [1] is an enhanced sampling technique that proposes larger MD timesteps to accelerate the simulation, which is not directly comparable to the kinetic modeling task (if this is what the reviewer **5LXW** refers to) included in this paper. Could the reviewer clarify what we are trying to compare here?\\n\\n---\\n\\n### Regarding the Splitting and Comparison with More Models\\n\\nAs we have explained in the paper, randomly splitting the data without considering generalizability is not meaningful. Scaffold splitting is prevalent in benchmarks such as MoleculeNet.
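To make concrete why scaffold splitting is the stricter protocol than random splitting, the sketch below (not the authors' actual code; the scaffold labels such as `s1` are placeholders for, e.g., precomputed Bemis-Murcko scaffolds) assigns whole scaffold groups to train/valid/test, so no scaffold is shared across splits and structurally near-duplicate molecules cannot leak from train into test:

```python
from collections import defaultdict

def scaffold_split(smiles_to_scaffold, frac_train=0.8, frac_valid=0.1):
    """Fill train/valid/test with whole scaffold groups, largest first,
    so that no scaffold appears in more than one split."""
    groups = defaultdict(list)
    for smi, scaf in smiles_to_scaffold.items():
        groups[scaf].append(smi)

    n = len(smiles_to_scaffold)
    n_train, n_valid = int(frac_train * n), int(frac_valid * n)
    train, valid, test = [], [], []
    for group in sorted(groups.values(), key=len, reverse=True):
        if len(train) + len(group) <= n_train:
            train.extend(group)
        elif len(valid) + len(group) <= n_valid:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test

# Toy example: molecule IDs mapped to placeholder scaffold labels.
data = {"mol1": "s1", "mol2": "s1", "mol3": "s2", "mol4": "s2",
        "mol5": "s3", "mol6": "s3", "mol7": "s3", "mol8": "s4",
        "mol9": "s5", "mol10": "s5"}
train, valid, test = scaffold_split(data)
# Every scaffold group stays within a single split.
```

A random split would scatter each scaffold group across splits, which is the correlation the rebuttal argues makes random-split results unreliable.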
Furthermore, the xxMD paper [2] contains the results of DimeNet++, MACE, etc.\\n\\n---\\n\\n### Regarding the Lack of Extensive Comparison\\n\\nWe carefully examined the performance and scaling behaviors of different pre-trained GNNs across diverse tasks:\\n\\n- **Conformational variety of single molecules** (force field regression, kinetic modeling).\\n- **Chemical variety of many molecules** (quantum chemical property prediction).\\n- **Conformational variety of peptides and proteins** (kinetic modeling).\\n- **Biological variety of many proteins** (folding classification).\\n\\nIn detail, we provided more than 100 data points of pre-training experiments covering two GNN architectures with variations in model depth, width, aspect ratio, and radius cutoff configurations on both equilibrium and non-equilibrium molecular datasets:\\n- **xxMD Experiments:** We tested two GNNs (non-pre-trained and pre-trained on PCQM and Denali, respectively) across hidden dimensions on four molecular systems.\\n- **QM9 Experiments:** We tested two GNNs (non-pre-trained and pre-trained on PCQM) across hidden dimensions on five tasks.\\n- **VAMPNet Experiments:** We evaluated three systems (molecules to proteins) with different embedding dimensions and token mixers.\\n- **Folding Classification Task:** We tested ProNet with and without pre-trained all-atom embeddings across different dimensions, representing the most comprehensive and detailed study among relevant works.\\n\\n---\\n\\n### Regarding the xxMD-Temporal Benchmarks\\n\\nIn the caption of **Table 2 (Page 8)**, we indicated that we compared models pre-trained (with prefix PT) and not pre-trained across different dimensions and targets. In **Table 3 (Page 9)**, we noted that only models pre-trained on the Denali dataset (PT-Denali) could positively transfer.
ViSNet, without being pre-trained, performs better on xxMD-Azobenzene and dithiophene subsets.\\n\\nTo better support our claims, we have added an additional table in the appendix with the complete results. We summarized the ViSNet/ET results on azobenzene, stilbene, malonaldehyde, and dithiophene across 64, 128, 256, and 384 dimensions with no pre-training, pre-trained on PCQM, and pre-trained on Denali setups in **[Appendix L, Page 31, Table 9]**.\\n\\n## Reply to minor points\\nAs we have explained in the paper (Page 5 Line 237-240) and the main figure (Page 2 Figure 1), *In each window, atomic structures are treated as individual graphs and processed by the pre-trained Geom-GNN to extract atomic-level features, which are aggregated into residue-level representations or \\u201ctokens.\\u201d The architecture can employ self-attention (SA), multi-layer perceptron (MLP), or message-passing mechanisms to enhance representational power.*\\n\\n## Reply to zero-shot definition \\nUsing masked language modeling as an example, language models trained to complete the sentence in English can transfer to complete the sentence in Chinese. (Same task, different data domain) In our setup, the backbone network is trained on small molecules with roughly 10-20 heavy atoms with denoising objective, and the backbone network is transferred to infer the atomistic embedding of peptides and proteins, and then a separate head is trained with a different objective. (Different task, different data domain) Since the backbone network has never seen protein systems, we think it\\u2019s appropriate to claim it as zero-shot transfer for embedding inference.\\n\\n## Reply to pre-training vs. 
fine-tuning\\nPre-training datasets are broad and diverse, aiming to build general representations, while downstream datasets are task-specific and focused, helping the model adapt to particular applications.\\n\\n [1] Klein, Leon, Andrew Foong, Tor Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank No\\u00e9, and Ryota Tomioka. \\\"Timewarp: Transferable acceleration of molecular dynamics by learning time-coarsened dynamics.\\\" Advances in Neural Information Processing Systems (NeurIPS), vol. 36, 2024.\\n [2] Pengmei, Zihan, Liu, Junyu, and Shu, Yinan. \\\"Beyond MD17: the reactive xxMD dataset.\\\" Scientific Data, vol. 11, no. 1, 2024, p. 222. Nature Publishing Group UK London.\"}", "{\"title\": \"Response to Reviewer KcDu Reply Part 1\", \"comment\": \"We thank the reviewer KcDu for their suggestion regarding the presentation, and we will clarify in the paper that the supervised, self-supervised, and unsupervised setups pertain solely to the 3D representational models. Similarly, we associate the unsupervised setups with kinetic modeling and fold classification, while the representational model remains fixed during embedding inference.\\n\\nWe would like to reiterate that while pre-training on PCQM datasets is feasible, testing the resulting models across the comprehensive evaluations and tasks in our paper is not. We also encourage the reviewers to focus on our research question: **\\\"Are pre-trained all-atom geometric graph neural network (GNN) representations transferable to protein modeling, and how expressive are they?\\\"**\\n\\n## Reply to Scaling Behavior\\n\\nWe respectfully disagree with the reviewer KcDu\\u2019s comments regarding our limited experimental setups, the lack of comparison with more pre-training methods and models, and the claim that our results are not generally or broadly applicable.
We would like to remind reviewers that the models discussed in [1] and [3] are not pre-trained, and that the models in [2] are pre-trained solely using the GraphMAE objective. Moreover, the literature focuses on specific supervised task types and objects, limited to small molecules, such as molecular property prediction and force field regression. Per our reply to the reviewer **GND7**, our study thoroughly examines the performance and scaling behavior of pre-trained GNNs across diverse tasks, including:\\n\\n- **Single-molecule conformational variety:** Force field regression and kinetic modeling. \\n- **Multi-molecule chemical variety:** Quantum property prediction. \\n- **Peptide/protein conformational variety:** Kinetic modeling. \\n- **Protein biological variety:** Folding classification.\\n\\nWe present over 100 pre-training experiments, testing two GNN architectures with varied depth, width, aspect ratio, and radius cutoff configurations on equilibrium and non-equilibrium datasets. Key experiments include:\\n\\n1. **VAMPNet:** Three systems (molecules to proteins) tested with different embedding dimensions and token mixers. \\n2. **Folding classification:** ProNet (with/without pre-trained all-atom embeddings) tested across dimensions, representing the most comprehensive study in relevant literature. \\n3. **Force field prediction:** Two GNNs (non-pre-trained and pre-trained on PCQM and Denali) evaluated across hidden dimensions on 4 molecular systems from xxMD. \\n4. 
**Molecular property prediction:** Two GNNs (non-pre-trained and pre-trained on PCQM) evaluated across hidden dimensions on 5 tasks.\\n\\nSince the reviewer KcDu mentioned the works [1], [2], and [3], we first provide a brief overview of the key experiments and setups in those works to facilitate a discussion:\\n\\n### [1]\\n- **Figure 3:** Training budget scaling of non-pre-trained 3D SchNet, PaiNN, and SpookyNet models on the MD17 datasets with 10,000 randomly split samples, varying batch size and learning rate (force field task). The authors claimed that longer training times yield better performance. However, as criticized by the authors of the xxMD paper, testing models on randomly split MD17 datasets leads to unreliable results due to the strong correlation between train, validation, and test sets.\\n- **Figures 5/A.3/A.4:** Dataset size scaling and parameter scaling of non-pre-trained 3D SchNet, PaiNN, and Allegro models on randomly split ANI1 datasets (force field task). Importantly, model aspect ratio and radius cutoff were not controlled. In contrast, we explored these two aspects in our work and found them to be important hyperparameters.\"}", "{\"metareview\": \"This paper studies whether pre-trained all-atom geometric GNN representations are transferable to protein modeling and how expressive they are.\\nTo answer this, the paper studies the scaling behaviors of state-of-the-art geometric GNNs in unsupervised, self-supervised, and supervised setups and demonstrates their transferability to proteins with all-atom resolution. The authors explore many aspects of geometric GNNs, like model size, aspect ratio, nearest neighbor cutoff radius, architecture, and transferability among different data types, which is comprehensive and distinguishes the work from other studies. Overall, I think the work is interesting, novel, and useful to the field of geometric GNNs. 
Thus, an acceptance is recommended.\", \"additional_comments_on_reviewer_discussion\": \"Among four reviewers, three acknowledged their concerns were successfully addressed during the discussion period. The other reviewer, 5LXW, had a long discussion with the authors. In her/his last comment, Reviewer 5LXW acknowledged she/he is not an expert in the field, so she/he did not want to change the initial ratings (except the overall rating, which is 6) but did not wish to block a potentially useful publication either. I checked all the discussions and believe most of the concerns have been addressed.\"}", "{\"title\": \"Response to Rebuttal by Authors\", \"comment\": \"Thank you for your detailed response. However, I still have some concerns based on your explanations.\\n\\n# Zero-shot Transfer Learners\\n\\nI agree with your claim only to the extent that it represents \\\"zero-shot transfer for embedding inference.\\\" If this is indeed what you mean, I strongly recommend explicitly clarifying this in the paper to prevent confusion among readers. Please highlight that the \\\"zero-shot transfer\\\" applies specifically to embedding inference, as the current phrasing may mislead readers into thinking it applies to the entire model.\\n\\nHowever, I maintain that this setup aligns more closely with a standard downstream fine-tuning approach. The model is pre-trained using self-supervised tasks and subsequently fine-tuned with task-specific heads. For example, the kinetic modeling task, as you described, involves further training on the VAMP score. This implies that the entire network (with the backbone parameters frozen) used for the task is trained specifically for this task and has seen task-specific data during fine-tuning. Therefore, using the term \\\"Zero-shot Transfer Learning\\\" throughout the paper without clear qualifications may be misleading. 
I advise you to avoid such potentially sensational terminology without sufficient explanation and to be precise about where and how zero-shot transfer applies in your work.\\n\\n\\n# Scaling Behavior\\n\\nI raised concerns regarding the experimental settings, particularly about why other pretraining objectives and models were not utilized. While I understand that pretraining on the PCQM4M dataset is computationally expensive and it may not be feasible to conduct additional experiments during the review, I believe this limitation affects the paper's broader claims.\\nAs an experiment-driven paper, such conclusions should be supported by broader experimental settings and benchmarks. It is insufficient to simply discuss how information is processed and aggregated and then claim that models are philosophically similar. Such statements do not substantiate general claims about scaling behaviors.\\n\\nThe experimental settings in this work are limited and do not justify general conclusions. Furthermore, as cited by the authors, existing works [1,2,3] have already explored scaling laws on graphs, testing a range of models to ensure the generalizability of their findings. A similar approach would strengthen the current paper's claims, ensuring that its results are broadly applicable.\\n\\n## References:\\n\\n[1] Frey, Nathan C., et al. \\u201cNeural scaling of deep chemical models.\\u201d Nature Machine Intelligence 5 (2023): 1297\\u20131305.\\n[2] Chen, Dingshuo, et al. \\\"Uncovering neural scaling laws in molecular representation learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n[3] Liu, Jingzhe, et al. \\\"Neural scaling laws on graphs.\\\" arXiv preprint arXiv:2402.02054 (2024).\\n\\n# Additional Minor Points\\n\\nThe paper claims to characterize the scaling behaviors of Geometric GNNs in unsupervised setups. However, I could not find any unsupervised tasks presented or analyzed across the paper. 
This claim is mentioned in the abstract and conclusion but is unsupported in the main text.\"}", "{\"comment\": \"Hi Reviewer 5LXW,\\n\\nPreviously, you mentioned, \\\"I didn't have very much time yet to look thoroughly through the responses and plan to do that later.\\\" Now the discussion stage is ending, and I hope you can take some time to acknowledge the authors' response.\\n \\nAt your earliest convenience, could you check the response from the authors, and then indicate if it changes your opinions or not?\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Response to Reviewer i3nd\", \"comment\": \"## Summary\\n\\nWe sincerely appreciate the reviewer **i3nd** for the careful proofreading of our paper, which has helped improve the quality of the manuscript. Our paper aims to answer a previously unanswered research question in graph representational learning: **\\\"Are pre-trained all-atom geometric graph neural network (GNN) representations transferable to protein modeling, and how expressive are they?\\\"**\\n\\nTo answer this research question, our contributions are as follows:\\n1. **Scaling Behaviors of Geometric GNNs:** \\n We studied the scaling behaviors of state-of-the-art geometric GNNs in unsupervised, self-supervised, and supervised setups, rather than focusing on creating new architectures or pre-training objectives. \\n2. **Demonstrating Transferability:** \\n We pre-trained these GNNs on small molecular datasets and demonstrated their transferability to proteins with all-atom resolution, highlighting their expressiveness in these settings.\\n\\n### Regarding the figure formatting\\n\\nWe took the reviewer's advice and revised the figures accordingly in the manuscript for clearer representation. Per other reviewers\\u2019 suggestions, we also restructured the introduction and abstract sections to highlight our RQ. 
\\n\\n### Regarding the comparison of model depth\\n\\n\\nWe appreciate the reviewer for suggesting comparing the effect of model aspect ratio in the fine-tuning stage. We did the following extra experiments on QM9 to illustrate the effect by fixing the parameter count and varying the depth of the model (this table is integrated in Appendix L as Table 10 on Page 32):\\n\\n| **Layer-Dim** | **Setup** | **\\u03b5_LUMO** | **\\u0394\\u03f5** |\\n|---------------|------------|------------|--------|\\n| **6L-128** | No PT | 42.8 | 94.5 |\\n| | PT on PCQM | 35.9 | 80.7 |\\n| **5L-144** | No PT | 46.1 | 104.8 |\\n| | PT on PCQM | 39.5 | 88.3 |\\n| **4L-160** | No PT | 45.6 | 98.5 |\\n| | PT on PCQM | 39.8 | 96.7 |\\n| **3L-184** | No PT | 44.5 | 98.4 |\\n| | PT on PCQM | 44.1 | 103.0 |\\n| **2L-216** | No PT | 43.5 | 97.6 |\\n| | PT on PCQM | 46.6 | 96.0 |\\n\\nInterestingly, we found there is no obvious trend of aspect ratio effect on the results of blank models, but there is a trend for pre-trained models where deeper models perform better than shallower models. We would also like to refer the reviewer to Figure 2 in Li et al. 2024 [1], where deeper models generally perform better than shallower models on force field tasks. \\n\\n[1] Li, Yunyang, Yusong Wang, Lin Huang, Han Yang, Xinran Wei, Jia Zhang, Tong Wang, Zun Wang, Bin Shao, and Tie-Yan Liu. \\\"Long-short-range message-passing: A physics-informed framework to capture non-local interaction for scalable molecular dynamics simulation.\\\" arXiv preprint arXiv:2304.13542, 2023.\"}", "{\"title\": \"Response to Reviewer GND7 Part 1\", \"comment\": \"## Summary\\n\\nWe sincerely appreciate reviewer **GND7** for their thoughtful feedback, which has significantly improved the manuscript's quality. 
We are grateful for their recognition of our contribution and the novelty of using self-supervised geometric graph neural networks (GNNs) as general molecular/amino acid descriptors containing all-atom information.\", \"our_paper_addresses_a_previously_unanswered_research_question_in_graph_representational_learning\": \"**\\\"Are pre-trained all-atom geometric GNN representations transferable to protein modeling, and how expressive are they?\\\"** \\n\\nRather than extending denoising pre-training techniques or focusing on pre-training task choices or model architectures, our contributions are as follows:\\n\\n1. **Scaling Study of Geometric GNNs:** \\n We examined the scaling behaviors of state-of-the-art geometric GNNs across unsupervised, self-supervised, and supervised setups, focusing on their general applicability rather than developing new architectures or pre-training objectives.\\n\\n2. **Transferability to Protein Modeling:** \\n We pre-trained these GNNs on small molecular datasets and demonstrated their transferability to protein modeling tasks with all-atom resolution, showcasing their expressiveness in these settings.\\n\\nPer the reviewers\\u2019 suggestions, we restructured the abstract and introduction to emphasize this research question.\\n\\nTo explore the expressiveness of these representations, we thoroughly evaluated their performance and scaling across various tasks, including: \\n- **Kinetic Modeling/Markov State Modeling** \\n- **Protein Folding Classification** \\n (trained on equilibrium small molecules, transferred to non-equilibrium and equilibrium protein structures) \\n- **Molecular Force Field and Property Predictions** \\n (both fine-tuning and training from scratch)\\n\\n## Response to Concerns About Limited Molecular Diversity in Evaluation\\n\\nWe thank the reviewer for highlighting this important concern. 
We agree that QM9, while historically significant, is a limited benchmark due to its age and the prevalent use of random splitting strategies, which favor memorization over generalization. To address this, we adopted scaffold splitting (Page 8, Table 2), as in MoleculeNet [1], to better evaluate generalization across molecular scaffolds. Under this more rigorous evaluation, our results show that increasing the number of parameters does not consistently improve performance, in contrast to prior findings.\\n\\nWe also concur that **xxMD**, which includes non-equilibrium molecular trajectories, is a more informative benchmark than MD17/rMD17. As noted in the xxMD paper [2] (Figure 2 and S4), random splits of poorly sampled trajectories result in minimal distinction between training and test sets.\\n\\nWhile we appreciate the reviewer\\u2019s suggestions to explore other benchmarks such as Polaris and TDC, these datasets provide only 2D SMILES strings, which are incompatible with geometric GNNs that require 3D molecular information. Furthermore, we are unaware of prior work benchmarking geometric GNNs on these datasets. Single conformers in drug-like property prediction pose additional challenges due to stereochemistry (e.g., chirality) significantly influencing functionality. The limited availability of 3D molecular benchmarks reflects the current focus of all-atom geometric GNNs on quantum chemical property prediction.\\n\\nTo address these gaps, we provide:\\n- Insights on pre-training GNNs and leveraging their embeddings for downstream tasks.\\n- Guidance on integrating these embeddings with other architectures in protein modeling (Figure 2/3/14\\u201320, Table 1).\\n- A new section (Appendix M, Page 31\\u201332) discussing optimal model configurations and dataset splitting strategies for robust evaluation.\\n\\n[1] Wu, Zhenqin, Ramsundar, Bharath, Feinberg, Evan N., Gomes, Joseph, Geniesse, Caleb, Pappu, Aneesh S., Leswing, Karl, and Pande, Vijay. 
\\\"MoleculeNet: a benchmark for molecular machine learning.\\\" Chemical Science, vol. 9, no. 2, 2018, pp. 513\\u2013530. Royal Society of Chemistry.\\n[2] Pengmei, Zihan, Liu, Junyu, and Shu, Yinan. \\\"Beyond MD17: the reactive xxMD dataset.\\\" Scientific Data, vol. 11, no. 1, 2024, p. 222. Nature Publishing Group UK London.\"}", "{\"summary\": \"In this paper the authors extend previous work on scaling laws for all-atom graph neural networks applied to self-supervised and supervised training. They investigate aspects of pre-training task choice, different downstream evaluations, GNN model size and aspect ratio, as well as the radial cutoff for constructing nearest neighbor graphs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper continues an important line of work in exploring the utility of pre-training and scaling GNNs for molecular learning tasks. Unlike other dominant areas of deep learning, all-atom molecular representation learning relies on GNNs and does not directly benefit from advances in scaling sequence-based models.\\nThe authors explore model size, aspect ratio, nearest neighbor cutoff radius, and architecture to provide a comprehensive look into the scaling behavior of molecular GNNs.\", \"weaknesses\": \"Like most molecular GNN works, the authors are limited by the available evaluations (e.g., QM9). xxMD is an interesting evaluation, but these datasets are limited to a small set of specific molecules. 
QM9 with B3LYP in particular is not informative, and the authors might consider newer benchmarks like POLARIS or a subset of the Therapeutic Data Commons to strengthen their evaluations.\\nThe paper does not offer a clear and concise summary of recommendations based on the empirical findings, which is essential to achieving the stated aim of inspiring practitioners to rethink GNN training.\", \"questions\": \"Many of the evaluations are on datasets for specific molecules, which are very useful for understanding specific model behavior, but should be complemented by more general evaluations, especially in the context of examining scaling behavior. Can the authors comment on or provide additional evidence that these specific, bespoke evaluations are connected to more general relationships between pre-training setups and downstream evaluations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In their work the authors empirically studied properties of pre-trained geometric GNNs (i.e., GNNs with coordinates attached).\\nThey especially seemed to consider pretraining via denoising as introduced by Zaidi et al. 
(2022).\", \"the_authors_consider_several_downstream_tasks\": \"molecular dynamics (kinetics modeling), fold classification, prediction of quantum-chemical properties, and, molecule conformations\\nThey study properties such as power-law scaling and find that the geomtric GNNs do not follow that on the pre-training task.\\nThe authors conclude that geometric GNNs \\\"saturate early and are bounded by issues such as under-reaching and over-smoothing\\\" and further that \\\"other aspects become more crucial, such as addressing data label uncertainty and employing active token mixing to mitigate the information bottleneck.\\\"\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"relevant research topics in the paper:\", \"pretraining of (geometric) GNNs\", \"scaling behavior of GNNs\", \"oversmoothing, underreaching\"], \"experiments\": [\"experiments considered not only random splitting, but also scaffold and temporal splits\"], \"weaknesses\": \"In general the largest problem to my opinion is that the authors lack to specify a clear research goal. Methodologically there seems not too much novelty.\\nBut if it is primarily an empirical research paper it seems very important to me that the research goal is clearly defined and dependent on that authors have\\nto reason why they are selecting the problems they look at and why they are selecting the methods they compare to. If a new benchmark dataset is used (as e.g., scaffold QM9), then empirical evaluations/comparisons should take into account at least some previously suggested methods (which are ideally complementary to each other to better see the potentials of each of the methods on the new benchmark and being able to compare to what authors say).\\n\\\\\\n\\\\\", \"details\": \"\\\\\\n\\\\\\n**hard to see the novelty in the paper**:\\\\\\nFact that pretraining is useful was already found e.g. by Zaidi et al. 
(2022).\\\\\\nNo new methodology seems to be suggested (or is \\\"token mixing\\\" the new method)?\\\\\\nStudy of scaling behavior might be interesting and to a certain degree novel for geometric GNNs, but no follow-up investigations seemed to be employed to draw formal connections to GNN properties like oversmoothing, underreaching, etc.\\n\\\\\\n\\\\\\n**paper is structured in a strange way, which makes it hard to follow the paper**:\\\\\\nIt is actually hard to understand what the research aim of the authors was.\\\\\\nchapters 4, 5, and 6 actually seem to be about empirical experiment results.\\\\\\nThe setups are partly however already explained in chapter 3.\\\\\\nchapters 4 and 6 show performances on problems\\\\\\nchapter 5 however studies power-law behavior and other ablations.\\\\\\nIn sum it gets hard to follow the paper. Better reasoning why some experiments are applied at all (with respect to the general research goal) and why they are done as they are done is necessary (e.g., why which methods are selected for comparison or why which dataset is used).\\n\\\\\\n\\\\\\n**experiments**:\\\\\\nalthough there are some good points as mentioned in strengths (such as the splits), it will get hard to understand how large the impact of pretraining really is, as the authors only test very few methods once with pretraining and once without pretraining.\\\\\\n\\\\\\nalso there is no good argument in the paper, why the authors exactly compare to those methods they selected to compare to \\ne.g.:\\\\\\nfor molecular dynamics, why don't they compare to Timewarp\\\\\\nfor QM9 and xxMD they could e.g. compare to \\\"E(n) Equivariant Graph Neural Networks\\\", \\\"SE(3)-Transformers\\\", DimeNet++, MACE, etc.\\\\\\nAn option to get more impression on the significance of the author's results would be to additionally compare to standard QM9, etc. 
(where there are also a lot of method comparisons out).\\n\\\\\\n\\\\\\n**minor points**:\\\\\\n\\\"token mixing\\\" not defined, but heavily used\\\\\\ngrammatical errors/typos: \\\"In silico molecular computation and simulation are indispensable tool in...\\\"\", \"questions\": [\"Why is the VAMP task considered zero-shot?\", \"Appendix D: Why is there a difference between datasets for pretraining and downstream? According to Table 2, QM9 there seem also experiments with pretraining. Are models in Table 3 not fine-tuned?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further discussion\", \"comment\": \"Dear Reviewer GND7,\\n\\nWe just would like to ask if our response has addressed your concerns? If not, we are very happy to discuss and further clarify. \\n\\nKind Regards,\\nAuthors\"}", "{\"title\": \"Thank you\", \"comment\": \"We really appreciate your helpful reviews and recognition of the novelty. We wish you a good break :)\\n\\nKind Regards,\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer 5LXW Reply\", \"comment\": \"### Reply to the Comparison of \\u2018Molecular Dynamics\\u2019\", \"we_chose_the_kinetic_modeling_tasks_for_several_key_reasons\": \"1. **Transferability Assessment:** These tasks allowed us to examine the transferability of pre-trained GNNs on molecule/protein conformational diversity that lies entirely outside the pre-training domain. \\n2. **Relevance to the Molecular Dynamics Community:** Kinetic modeling and collective variable (CV) learning have a long history and remain areas of broad interest within the molecular dynamics community. \\n\\nWe selected the alanine dipeptide and pentapeptide datasets from the publicly available MDShare website. Alanine dipeptide is a simple toy system, where the two backbone dihedrals serve as well-established CVs. 
Pentapeptides, on the other hand, are widely studied in the relevant literature. This dataset is used as a beginner\\u2019s tutorial in tools like PyEMMA and DeepTime, as introduced by No\\u00e9 et al. Indeed, we designed **Figure 3 (Page 6)** to be analogous to the PyEMMA tutorial, as noted in the footnote on **Page 5**, allowing the reviewer to directly compare our results with those in that tutorial.\\n\\nWe also chose the lambda6-85 dataset from Folding@home because it is publicly available, enabling us to illustrate the transferability of pre-trained embeddings to much larger systems. Furthermore, the original paper on lambda6-85 proposed specific CVs that we compared in **Figure 16 (Page 26)**, which itself required non-trivial effort. While we acknowledge that long trajectories of fast-folding proteins from D.E. Shaw [1] are commonly used for benchmarking, these datasets are not publicly available, which does not comply with the ICLR guidelines.\\n\\nWe kindly ask reviewer 5LXW to clarify the meaning of comparing kinetic modeling \\\"with respect to the speeding up of molecular dynamics\\\" or \\\"how effective the pre-training is.\\\" If the reviewer is referring to how the learned CVs could be used for biased dynamics, we refer them to related literature such as [2][3]. If the reviewer is instead referring to how effective pre-training is for feature extraction, we suggest consulting **Figure 17 (Page 27)**, which inspects the pre-trained feature space, in conjunction with all the results presented in the main text.\\n\\n---\\n\\n### On the Choice of ViSNet\\n\\nAs explained earlier, we selected ViSNet due to its outstanding performance and efficiency. As indicated by Wang et al. [4] in the original ViSNet paper, it outperforms all the models mentioned by reviewer 5LXW on numerous datasets, including molecular property prediction and force field prediction, and does so with significantly lower computational costs. 
This is especially evident when compared to group-equivariant models such as Equiformer, NequIP, Allegro, and MACE. \\n\\nMoreover, ViSNet has been carefully benchmarked on a real protein system, **chignolin**, where it significantly outperforms group-equivariant networks like MACE across various conformational states. It is unclear how additional comparisons would benefit practitioners looking to apply all-atom 3D GNNs to protein modeling.\\n\\nAt the same time, ET (Equivariant Transformer), which is shipped with the TorchMD package [5], has been widely applied in protein dynamics modeling. However, there is a notable lack of similarly comprehensive tests for the other architectures mentioned by the reviewer. We believe our comparisons already provide relevant insights into the effectiveness of ViSNet for practical protein modeling tasks.\\n\\n**References**\\n1. Lindorff-Larsen, K., Piana, S., Dror, R. O., & Shaw, D. E. (2011). How fast-folding proteins fold. *Science*, 334(6055), 517\\u2013520.\\n2. Bonati, L., Piccini, G. M., & Parrinello, M. (2021). Deep learning the slow modes for rare events sampling. *Proceedings of the National Academy of Sciences*, 118(44), e2113533118.\\n3. Zou, Z., Wang, D., & Tiwary, P. (2024). Graph Neural Network-State Predictive Information Bottleneck (GNN-SPIB) approach for learning molecular thermodynamics and kinetics. *arXiv preprint arXiv:2409.11843*.\\n4. Wang, Y., Wang, T., Li, S., He, X., Li, M., Wang, Z., Zheng, N., Shao, B., & Liu, T.-Y. (2024). Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing. *Nature Communications*, 15(1), 313.\\n5. Pelaez, R. P., Simeon, G., Galvelis, R., Mirarchi, A., Eastman, P., Doerr, S., Th\\u00f6lke, P., Markland, T. E., & De Fabritiis, G. (2024). TorchMD-Net 2.0: Fast Neural Network Potentials for Molecular Simulations. 
*Journal of Chemical Theory and Computation*.\"}", "{\"title\": \"Response to Reviewer KcDu Part 2\", \"comment\": \"### About including invariant feature-based networks\\n\\nLet\\u2019s reiterate here: we did not focus on a specific GNN architecture nor a specific pre-training technique, and we did not propose a new GNN architecture nor a new pre-training technique in this paper. Could the reviewer clarify what we should compare? \\n\\nThough this discussion is far from the focus of the paper, we can briefly talk about different forms of geometric GNNs. Theoretically, O(d)-equivariant functions can be universally expressed by a collection of scalars [5]. And practically, the performance of GemNet/Equiformer/ViSNet/MACE/etc. is not that different on various benchmarks. In line with GVP, PaiNN/ET/ViSNet directly track the vector features by parameterizing the displacement vectors, and those features can be used to predict vector quantities. As the reviewer suggested, the expressiveness of invariant-feature based GNNs increases with the order of features used (SchNet: bond length, DimeNet: bond length/bond angle, GemNet: bond length/bond angle/dihedrals), but higher-order features, for instance dihedrals, require enumerating four indices at the same time. ViSNet includes higher-order features by taking the product of vector representations of source and target nodes, resulting in higher efficiency. As invariant networks do not directly track the vector features, we would have to take the derivative of the scalar output with respect to the input coordinates to predict vector values, and we do not see a point in doing so. Regarding group-equivariant networks, we do not foresee considerable differences as explained earlier, except for more hyperparameter tuning and computational cost. 
Again, we would like to emphasize the fact that comparing different GNN architectures or pre-training objectives is not the goal of this paper, and adding this comparison could also easily multiplicatively increase the workload for evaluating every pre-training and downstream task. \\n\\n[5] Villar, Soledad, David W. Hogg, Kate Storey-Fisher, Weichi Yao, and Ben Blum-Smith. \\\"Scalars are universal: Equivariant machine learning, structured like classical physics.\\\" Advances in Neural Information Processing Systems (NeurIPS), edited by A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, 2021.\"}" ] }
4RRmy9iw3c
AutoAL: Automated Active Learning with Differentiable Query Strategy Search
[ "Yifeng Wang", "Xueying Zhan", "Siyu Huang" ]
As deep learning continues to evolve, the need for data efficiency becomes increasingly important. Considering labeling large datasets is both time-consuming and expensive, active learning (AL) provides a promising solution to this challenge by iteratively selecting the most informative subsets of examples to train deep neural networks, thereby reducing the labeling cost. However, the effectiveness of different AL algorithms can vary significantly across data scenarios, and determining which AL algorithm best fits a given task remains a challenging problem. This work presents the first differentiable AL strategy search method, named AutoAL, which is designed on top of existing AL sampling strategies. AutoAL consists of two neural nets, named SearchNet and FitNet, which are optimized concurrently under a differentiable bi-level optimization framework. For any given task, SearchNet and FitNet are iteratively co-optimized using the labeled data, learning how well a set of candidate AL algorithms perform on that task. With the optimal AL strategies identified, SearchNet selects a small subset from the unlabeled pool for querying their annotations, enabling efficient training of the task model. Experimental results demonstrate that AutoAL consistently achieves superior accuracy compared to all candidate AL algorithms and other selective AL approaches, showcasing its potential for adapting and integrating multiple existing AL methods across diverse tasks and domains.
[ "Active Learning", "Differentiable Bi-level Optimization" ]
Reject
https://openreview.net/pdf?id=4RRmy9iw3c
https://openreview.net/forum?id=4RRmy9iw3c
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zUdsYNEY8z", "xmZPwe10Yz", "rvCdg0gbB6", "oNjkGt2bsI", "kWv7paJ9FX", "hhvfGeFDOR", "e7TRy7WcFL", "crureG4Tdu", "bTyyWiTNLt", "aKVPd0GnIK", "XFHrqVMYh6", "TUmniBAAG8", "Sgc4zURpBw", "Ooczd6pUcr", "OG3RZeMkUj", "NB1JMz4tF7", "JWVSQhWIPs", "99X0FjEpas" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523708790, 1732676507925, 1732337714919, 1732338368251, 1732338150122, 1732620230449, 1732338262667, 1730309533705, 1730660827193, 1732338087214, 1732338490431, 1734661904738, 1732338292323, 1732337741300, 1732348009284, 1732496146762, 1731303930545, 1730549923650 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_164p" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_U39s" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_U39s" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_Rqgr" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Area_Chair_Wx15" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Authors" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_Rqgr" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_fWbJ" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_fWbJ" ], [ "ICLR.cc/2025/Conference/Submission5471/Reviewer_164p" ] ], "structured_content_str": [ 
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"discussion\", \"comment\": \"Thanks for your responses. Still, I think the technical novelty is not as significant as claimed. I decide to maintain my scores.\"}", "{\"title\": \"Rebuttal for Reviewer fWbJ\", \"comment\": \"Thank you for your valuable comments. We appreciate that you confirm our contribution on proposing the first Automatic AL search method with a differentiable bi-level framework, which will make AL automatically suitable to different real-life applications.\\n\\nFor your questions, we make the following comments to clarify our points:\\n**Q1:** Provide examples where the proposed AutoAL approach is particularly advantageous compared to hybrid AL methods that combine uncertainty and diversity\\n\\n **A1:** While hybrid methods have shown relatively good performance in some real-world applications compared to single uncertainty- or diversity-based methods, determining the optimal trade-off between these strategies remains a challenge. This limitation makes hybrid methods less robust across all scenarios because they often rely on fixed heuristics, such as weighted-sum [R2] or multi-stage optimization [R3]. For example, in our revised paper Figure 2, BadgeSampling [R3] can outperform single uncertainty- or diversity-based methods in some cases (e.g., SVHN), but it performs poorly in others, such as PathMNIST.\\n\\n In contrast, AutoAL treats the trade-off between various AL candidates as a learning task. It is the first method capable of automatic selection among candidate AL strategies. Our experimental results demonstrate that AutoAL generalizes well to diverse real-world applications, including both natural and medical image datasets.\\n\\n\\n**Q2:** How does AutoAL handle skewed or imbalanced labeled data, particularly if the initial labeled set suffers from class imbalance? 
Would this impair its performance, given the assumption of a randomly selected initial set?\\n\\n **A2:** Thank you for raising this insightful question. Our method is not specifically designed to address the class imbalance problem. However, we have conducted experiments on imbalanced datasets, such as the medical datasets (see Table 1 for detailed descriptions of these datasets). Additionally, to ensure AutoAL's robustness in such scenarios, we repeated our trials three times. \\n\\n The experimental results indicate that AutoAL exhibits very low standard deviation, demonstrating that even when the initial selection pool is not carefully balanced, AutoAL consistently selects the best strategy and outperforms other baselines.\\n\\n\\n**Q3:** Does AutoAL guarantee that training with labeled data from the current AL round will identify the most informative samples in the next round? A detailed analysis of its guarantees would be helpful.\\n\\n **A3:** AutoAL samples data uniformly from the pool, ensuring that the data in each cycle follows approximately the same distribution. The experimental results confirm that this approach works well, as AutoAL consistently outperforms baselines in every AL round. Furthermore, AutoAL shows minimal accuracy drops between rounds. Some baseline methods, for instance MarginSampling on SVHN, suffer a severe accuracy drop in round 4. In addition, this sampling approach is standard practice in learning-based AL methods [R4] to ensure good generalization across AL rounds.\\n\\n\\n**Q4:** The approach of training an additional network for sample selection shows similarities to [R1] employing meta-learning with an additional network for querying.\\n\\n **A4:** Thank you for mentioning this meta-learning work. We believe AutoAL has significant differences from MQNet:\\n(1) MQNet focuses on training an MLP that takes an open-set score and an AL score as input, outputting a balanced meta-score for sample selection. 
In contrast, AutoAL trains an active learning selection network to determine the most effective AL strategy based on the labeled dataset;\\n(2) To address efficiency concerns, AutoAL employs a bi-level optimization strategy and differentiable query strategy optimization (see Sec. 3.3), which is not employed by MQNet;\\n(3) MQNet is designed for the open-set dilemma, whereas AutoAL focuses on datasets where class labels are present in the labeled set.\\n\\tThank you for the helpful suggestion; we have revised our Related Work accordingly.\\n \\n**Q5:** Can the proposed method be applied to the open-set AL problem [R1]?\\n \\n **A5:** As mentioned in **Q4**, AutoAL is not designed to address the open-set problem. However, it can automatically balance uncertainty and diversity within in-distribution (IN) data. We sincerely thank you for the insightful advice and will consider this as future work for AutoAL.\\n\\n**Q6:** The datasets used in the experiments are of small scale. It is imperative to validate the performance on large-scale datasets, such as ImageNet.\\n \\n **A6:** We have added experiments on the TinyImageNet dataset (200 classes). Please refer to Figure 2 in the revised paper. Our method outperforms all baselines at every round, demonstrating the scalability of AutoAL.\"}", "{\"title\": \"Rebuttal for Reviewer U39s\", \"comment\": \"We thank you for your positive feedback and your recognition of our proposed AutoAL! We also thank you for your valuable question, which we clarify as follows:\\n\\n**Q1:** My only concern is the efficiency of the AutoAL algorithm. 
Although more efficient solutions have been proposed to solve second-order optimization problems, I cannot find any relevant experiments to verify them.\\n\\n **A1:** The method described in Section 3.3 relaxes the search space, enabling AutoAL to efficiently perform gradient descent for updates, thereby improving its overall efficiency.\\n\\nDuring the rebuttal period, we conducted experiments on a relatively large dataset, TinyImageNet. Please refer to Figure 2 in the revised paper. The results demonstrate that AutoAL performs well on large datasets compared to other AL baselines. Additionally, we have analyzed the time cost of AutoAL training. AutoAL is efficient: the primary time cost comes from sampling the AL strategies, not from the AutoAL training itself. The overall time cost of AutoAL is comparable to other baseline methods, such as Ensemble Variance Ratio Learning. Please refer to Section 4.5 for further details.\"}", "{\"title\": \"Question 6 and References for the Rebuttal for Reviewer Rqgr\", \"comment\": \"**Q6:** Can AutoAL be extended to generate new active learning strategies dynamically rather than relying solely on a predefined candidate pool?\\n \\n **A6:** AutoAL is not designed to generate new active learning strategies dynamically. Instead, it focuses on integrating any kind of AL strategy into its candidate pool and leveraging them effectively. We believe this approach allows AutoAL to take full advantage of existing advanced strategies while maintaining flexibility and extensibility.\", \"reference\": \"[R5] Mussmann, Stephen O., and Sanjoy Dasgupta. \\\"Constants matter: The performance gains of active learning.\\\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[R6] Huang, Siyu, et al. \\\"Semi-supervised active learning with temporal output discrepancy.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\\n\\n[R7] Hacohen, Guy, and Daphna Weinshall. 
\\\"How to select which active learning strategy is best suited for your specific problem and budget.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[R8] Zhang, Jifan, et al. \\\"Algorithm selection for deep active learning with imbalanced datasets.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"comment\": \"Thanks for your detailed response. I maintain the original score of 6.\"}", "{\"title\": \"Rebuttal for Reviewer 164p\", \"comment\": \"Thank you for your valuable feedback. We appreciate that you confirm our contribution on solving the strategy selection problem and our rational method: using differentiable bi-level framework.\\n\\nFor the weakness, we make the following comments to clarify our points:\\n\\n**Q1:** There is still room for improvement in the paper writing.\\n 1.1 It's unnecessary to name the two networks \\\"fitnet\\\" and \\\"searchnet,\\\" as it seems intended to make people think this is a significant innovation. However, in meta-learning, this kind of separated network design and bi-level optimization paradigm is very common.\\n\\n 2.1. The notations are somewhat confusing. For example, the authors didn't clearly define the output of the search net in Section 3.2, making it hard to understand Sec 3.2. It wasn\\u2019t until I finished reading the method section that I realized the output is actually a sample-wise score, forming an aggregation of scores for different queries.\\n\\n **A1:** Thank you for raising these concerns. First, we named the networks \\\"FitNet\\\" and \\\"SearchNet\\\" because we believed these names reflected their tasks: \\\"FitNet\\\" models the data distribution to make AutoAL fit the data, while \\\"SearchNet\\\" searches the best AL strategy in the candidate pool. 
Thank you for the comment; to align with the network naming used in previous AL works [R11], we will rename \\\"FitNet\\\" to \\\"TaskNet\\\" in the final version.\\n\\n Second, we apologize for the confusion caused by unclear notations. We have revised the paper to define the outputs of both SearchNet and FitNet explicitly in Section 3.2. Additionally, we will update our main figure to clarify these details for readers.\\n\\n**Q2:** The novelty of this paper is relatively limited. The proposed meta-learning/bi-level optimization has been applied to AL [R9,R10]. Also, I think the algorithm design is too complicated.\\n\\n **A2:** Thank you for referencing these materials. However, we believe the works you mentioned differ significantly from our AutoAL:\\n (1) In [R9], the focus is not on algorithm selection but on treating AL as a meta-learning task using reinforcement learning to predict the next best data point. Additionally, their network is not differentiable, and their use of a reward function results in high computational costs.\\n (2) In [R10], the meta-gradients are computed with respect to perturbations added to labels, rather than through differentiation with respect to model parameters. Furthermore, their approach uses meta-learning as an uncertainty-based active sampler, while AutoAL is designed to select the best strategy from multiple AL candidates. Lastly, their work is targeted at semi-supervised node classification, not image recognition.\", \"to_clarify_our_contribution\": \"Usually, a single AL method cannot perform well across all real-world applications. AutoAL focuses on selecting the best strategy from existing AL methods. Moreover, our network is differentiable, enabling efficient AL strategy selection through gradient descent.\\n\\n**Q3:** The motivation for modeling the scores by GMM distributions is unclear. Why is the score function of each strategy distributed as a Gaussian Distribution? 
Why is the final score function a linear weighted aggregation of different strategies? The authors should provide a concrete application or example.\\n\\n **A3:** Thank you for raising these insightful questions. First, in AL settings, the strategy will only query a subset of the data. As defined in Section 3.3, the ratio t (batch size b divided by the total pool size M+N) determines the number of data points to query. However, it is difficult to directly identify the top-t data points. To address this, we use a Gaussian Mixture Model (GMM) to model AL scores, setting the t-th best value as the sigmoid zero point. This approach relaxes the search space and effectively supervises the updates to SearchNet using the loss function in Section 3.4.\\n\\n Second, the linear aggregation of scores is a design choice tailored to our method. For each image, if a candidate strategy selects it as one of the top data points, it receives a high score (1 in our case); otherwise, it receives a low score (0). The final score for each image is a linear combination of strategy-specific scores weighted by the image's self-loss (computed by FitNet). This design balances the contributions of different strategies and enables effective modeling of final image importance.\\n \\n Thank you for the comments; we will add more details on the method design in the final paper.\"}", "{\"summary\": \"This paper attempts to tackle the \\\"generalization problem\\\" of active learning (AL) algorithms across data scenarios. I believe this is a core issue in the current active learning field. This paper proposes AutoAL, a differentiable AL strategy search method to select the most effective AL sampling strategies in each iteration. It consists of two neural nets, named SearchNet and FitNet, which are optimized concurrently under a differentiable bi-level optimization framework. 
The experiments on multiple datasets validate the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The problem studied in this paper is valuable. This paper presents the first differentiable AL strategy search method.\\n2. The proposed AutoAL approach is interesting and easily followed.\\n3. The paper is well organized. \\n4. The experiments validate the effectiveness of the proposed approach, and the ablation study in Figure 5 is insightful.\", \"weaknesses\": \"1. My only concern is the efficiency of the AutoAL algorithm. Although more efficient solutions have been proposed to solve second-order optimization problems, I cannot find any relevant experiments to verify them.\", \"questions\": \"See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"1. My only concern is the efficiency of the AutoAL algorithm. Although more efficient solutions have been proposed to solve second-order optimization problems, I cannot find any relevant experiments to verify them.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents AutoAL, a framework for automated active learning that optimizes query strategy selection using differentiable methods. Traditional active learning approaches often rely on predefined strategies like uncertainty sampling or diversity sampling, which may not perform optimally across different datasets or tasks. AutoAL addresses this limitation by integrating existing active learning strategies into a unified framework. It employs two neural networks, SearchNet and FitNet, within a bi-level optimization structure to automate the selection process. By relaxing the discrete search space of active learning strategies into a continuous domain, AutoAL enables gradient-based optimization, enhancing computational efficiency and adaptability. 
Experimental results demonstrate that AutoAL consistently outperforms individual strategies and other selective methods across various natural and medical image datasets, highlighting its effectiveness and versatility.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A new approach that automates active learning strategy selection through differentiable optimization, surpassing manual and non-differentiable methods.\", \"Effective integration of strategy selection and data modeling via the bi-level optimization of SearchNet and FitNet.\", \"Flexibility and adaptability, allowing incorporation of multiple existing strategies and tailoring to specific tasks and data distributions.\"], \"weaknesses\": [\"Increased complexity and computational overhead due to the additional neural networks and bi-level optimization, potentially challenging scalability on large datasets.\", \"Dependence on a predefined pool of candidate strategies, which may limit performance if optimal strategies are not included.\", \"Lack of in-depth theoretical analysis explaining the method's effectiveness and the conditions under which it performs best, possibly affecting generalizability.\"], \"questions\": [\"How does AutoAL perform in terms of computational efficiency compared to traditional methods on large-scale datasets?\", \"What mechanisms ensure robustness against convergence issues and local minima in the bi-level optimization?\", \"Can AutoAL be extended to generate new active learning strategies dynamically rather than relying solely on a predefined candidate pool?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal for Reviewer Rqgr\", \"comment\": \"Thank you for your valuable feedback. 
We appreciate that you recognize our contribution of proposing the first automatic AL search method with a differentiable bi-level framework, which surpasses manual and non-differentiable AL methods.\\n\\nFor your concerns, we make the following comments to clarify our points:\\n**Q1:** Increased complexity and computational overhead due to the additional neural networks and bi-level optimization, potentially challenging scalability on large datasets.\\n\\n **A1:** We agree that scaling bi-level optimization to large datasets presents efficiency challenges. To address this, the core of our work focuses on designing a differentiable query strategy optimization to significantly reduce computational complexity.\\n\\nIn particular, we have validated AutoAL on two large-scale datasets: TissueMNIST (before rebuttal) with *165,466* images, and TinyImageNet (after rebuttal) with *100,000* images. Our results demonstrate that AutoAL surpasses other baseline models. Additionally, we have included an analysis of computational cost (in terms of time) in the revised paper (see Table 2). The time cost of our AutoAL is comparable to other baselines, such as Ensemble Variance Ratio Learning. We believe these results demonstrate that AutoAL is scalable and generalizable to large datasets.\\n\\n\\n**Q2:** Dependence on a predefined pool of candidate strategies, which may limit performance if optimal strategies are not included.\\n\\n **A2:** AutoAL can easily integrate additional AL strategies into the candidate pool. However, even if the optimal AL strategy is not included, our results in Figure 4 show that simply enlarging the candidate pool does not always improve performance. 
This suggests that AutoAL can still select a relatively effective uncertainty-based or diversity-based strategy from the pool and achieve good performance, even in the absence of the theoretically best strategy.\\n\\n**Q3:** Lack of in-depth theoretical analysis explaining the method's effectiveness and the conditions under which it performs best, possibly affecting generalizability.\\n\\n **A3:** [R5] performs a comprehensive theoretical analysis of active learning. As discussed in [R5], AL performance often varies across different problem settings. AutoAL seeks to address this limitation by generalizing to different datasets automatically.\\n\\nWe designed two networks\\u2014SearchNet and FitNet\\u2014within a bi-level framework. FitNet uses gradient descent to adapt to the labeled data distribution, simulating the final classification model. SearchNet is updated under FitNet's supervision and identifies the most informative data points. This process is supported by the common practice of selecting samples with maximum loss, as discussed in [R6].\\n\\nFrom our experiments, AutoAL has consistently outperformed baselines across various datasets, including TinyImageNet (during rebuttal) and additional medical image classification tasks. These datasets are widely used in prior research [R7,R8]. Furthermore, Table 2 in the revised paper highlights AutoAL's efficiency and generalizability. Please refer to Section 4.5 for further details.\\n\\n\\nFor your questions, we want to clarify the following:\\n**Q4:** How does AutoAL perform in terms of computational efficiency compared to traditional methods on large-scale datasets?\\n \\n **A4:** We have added the differentiable query strategy optimization (Section 3.3) to relax the search space, which reduces computational complexity. The computational cost analysis included in Table 2 of the revised paper shows that AutoAL's time cost is comparable to methods like ENSvarR and VAAL. 
Most of the time is spent on the computation of scores of different AL strategies, while the AutoAL network update incurs trivial additional cost. Additionally, the time ratio relative to EntropySampling remains stable even on large-scale datasets, demonstrating that AutoAL scales effectively.\\n\\n**Q5:** What mechanisms ensure robustness against convergence issues and local minima in the bi-level optimization?\\n \\n **A5:** Several mechanisms ensure robustness in our framework:\\n\\n (1) FitNet: Convergence is straightforward as FitNet minimizes the loss over the labeled data pool, aligning with the data distribution.\\n (2) SearchNet: We relax the search space, enabling gradient ascent to supervise SearchNet's updates effectively and facilitate convergence.\\n (3) AL characteristics: SearchNet and FitNet are retrained during each AL round using updated data distributions, which allows SearchNet to avoid local minima and converge toward a global solution.\"}", "{\"title\": \"For All the Reviewers, PCs, ACs\", \"comment\": \"We apologize for the delayed response. We sincerely thank all the reviewers for their valuable feedback and the time dedicated to improving our work. We have carefully reviewed all comments and made the following changes to our paper during the rebuttal period:\\n\\n1. We added **four baselines**: Coreset [ICLR 2018], Ensemble Variance Ratio Learning [CVPR 2018], Variational Adversarial Active Learning (VAAL) [ICCV 2019], and Deep Deterministic Uncertainty (DDU) [CVPR 2023], along with the **TinyImageNet dataset** to verify the generalizability of AutoAL. These new results further demonstrate that AutoAL outperforms all baselines across different settings. Please refer to the updated Figure 2 and Section 4 in the revised paper for more details.\\n2. We included a comprehensive **method efficiency analysis** of AutoAL and other baselines on different datasets in Table 2. 
The results indicate that the complexity and time cost of AutoAL are comparable to other baselines, such as Ensemble Variance Ratio Learning. Moreover, we found that the most time-consuming component is AL strategy sampling, not the AutoAL update process. This suggests that even with bi-level optimization, the relaxed search space and differentiable design of AutoAL ensure that it does not significantly increase time cost.\\n3. We clarified the differences between our work and meta-learning in the revised Related Work section.\\n\\nAdditionally, we would like to highlight the contributions of this work again:\\n1. AutoAL is specifically designed for automatic active learning strategy selection in diverse real-world settings. Our bi-level optimization framework allows integration of *any existing AL methods* into the candidate pool, enabling the selection of the most effective strategy for a given task.\\n2. AutoAL is *the first differentiable AL strategy selection method*. Key innovations, such as the probabilistic query strategy and differentiable acquisition function optimization, effectively reduce the complexity of bi-level optimization, making the time cost of AutoAL comparable to traditional AL strategies.\\n3. Our experimental results demonstrate that AutoAL achieves state-of-the-art performance across all datasets, including the TinyImageNet dataset. These findings highlight the generalizability of our proposed method in real-world applications.\\n\\nWe hope you will take some time to review our revised paper and our responses to your questions. 
If you have any further questions, we would be more than happy to provide additional clarifications before the rebuttal deadline.\"}", "{\"metareview\": \"This work introduces AutoAL, the first differentiable active learning strategy search method, which uses a bi-level optimization framework to adaptively identify optimal AL strategies, achieving superior accuracy and efficiency across diverse tasks and domains.\\n\\nThree reviewers reviewed this paper. All of them agree that this paper is not yet ready for publication at this stage. We recommend the authors go through all the reviews and address them in the next version.\", \"additional_comments_on_reviewer_discussion\": \"Three reviewers reviewed this paper. All of them agree that this paper is not yet ready for publication at this stage. We recommend the authors go through all the reviews and address them in the next version.\"}", "{\"title\": \"Question 4 and References for the Rebuttal for Reviewer 164p\", \"comment\": \"**Q4:** The comparison methods are too outdated, with the latest ones being LPL and BADGE from 2019. Additionally, the datasets are quite limited; despite the complexity of the method design, only CIFAR and MNIST datasets were used. Validation should be conducted on the ImageNet dataset (at least Image100). Otherwise, given that the algorithm design is much more complex than the baselines, its effectiveness cannot be convincingly demonstrated.\\n\\n **A4:** Thank you for the helpful suggestion. We have added the TinyImageNet dataset and four baselines for comparison: Coreset [ICLR\\u20192018], Ensemble Variance Ratio Learning [CVPR\\u20192018], Variational Adversarial Active Learning [ICCV\\u20192019], Deep Deterministic Uncertainty [CVPR\\u20192023]. The results in Figure 2 demonstrate that our method outperforms all baselines across every dataset, including the newly added TinyImageNet.\\n\\nTable 1 summarizes the datasets used in this work. 
We also want to clarify that MedMNIST does not consist of handwritten digits like MNIST; it is a large-scale collection of standardized biomedical images. For example, TissueMNIST contains kidney cortex microscope images, with a total of 165,466 images in its training set.\", \"reference\": \"[R9] Pang, Kunkun, et al. \\\"Meta-learning transferable active learning policies by deep reinforcement learning.\\\" arXiv preprint arXiv:1806.04798 (2018).\\n\\n[R10] Madhawa, Kaushalya, and Tsuyoshi Murata. \\\"Active learning on graphs via meta learning.\\\" ICML Workshop on Graph Representation Learning and Beyond, ICML. 2020.\\n\\n[R11] Huang, Siyu, et al. \\\"Semi-supervised active learning with temporal output discrepancy.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\"}", "{\"title\": \"References for the Rebuttal for Reviewer fWbJ\", \"comment\": \"References:\\n[R1] Park, Dongmin, et al. \\\"Meta-query-net: Resolving purity-informativeness dilemma in open-set active learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 31416-31429.\\n\\n[R2] Yin, Changchang, et al. \\\"Deep similarity-based batch mode active learning with exploration-exploitation.\\\" 2017 IEEE international conference on data mining (ICDM). IEEE, 2017.\\n\\n[R3] Ash, Jordan T., et al. \\\"Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds.\\\" International Conference on Learning Representations (ICLR). 2020.\\n\\n[R4] Clarysse, Jacob, and Fanny Yang. \\\"Uniform versus uncertainty sampling: When being active is less efficient than staying passive.\\\"\"}", "{\"comment\": \"I have carefully reviewed the authors' responses. While I fully agree with the motivation and problem statement of AutoAL, I still find the theoretical foundation for the purpose of the bi-level optimizer to be insufficient. 
The idea of minimizing loss has already been explored in the existing literature, and studies addressing issues like the scaling overhead required for diverse pool selection still seem lacking. I appreciate the effort put into preparing the rebuttal, but I will maintain my original score.\"}", "{\"comment\": \"Thanks for the clarification. I have read all the responses. I would like to keep my original score.\"}", "{\"summary\": \"This work introduces AutoAL, a differentiable active learning (AL) strategy search method that builds on existing AL sampling strategies. AutoAL contains two neural networks, SearchNet and FitNet, which are co-optimized through a differentiable bi-level optimization framework to identify optimal AL strategies for different tasks. Experimental results show that AutoAL outperforms individual AL algorithms and other selective approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The work handles an important ML problem; Active Learning (AL) with differentiable strategy search.\", \"The proposed method is technically sound\", \"Writing is clear and easy-to-follow\"], \"weaknesses\": [\"Hybrid AL methods that combine uncertainty and diversity have been demonstrated to perform effectively in a variety of situations. It would be beneficial to include examples where the proposed AutoAL approach is particularly necessary or advantageous for specific applications.\", \"The bi-level optimization within AutoAL relies on labeled data. How does the algorithm perform if the labeled data is skewed or imbalanced? For instance, if the initial labeled set suffers from class imbalance, might this severely impair the algorithm? 
The assumption of a randomly selected initial set, as used in the current experiments, appears to be less practical.\", \"Similarly, is there a guarantee that the AutoAL approach, trained with labeled data from the current AL round, will identify the most informative samples from the unlabeled pool in the subsequent AL round? A more detailed analysis of the algorithm's guarantees is necessary.\", \"The approach of training an additional network for sample selection shows similarities to [1] employing meta-learning with an additional network for querying.\", \"Can the proposed method be applied to the open-set AL problem [1]?\", \"The datasets used in the experiments are of small scale. It is imperative to validate the performance on large-scale datasets, such as ImageNet.\", \"---\", \"[1] Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning, NeurIPS, 2022\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an active strategy query algorithm where the optimal query strategy is selected by a bi-level optimization network. In particular, the authors aggregates the query strategies by a scoring function implemented as a Gaussian Mixture Model. Then, the authors split out a validation set from the labeled samples to guide scoring function calculation. Experimental results show that the proposed method supasses the baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The studied strategy selection problem for active learning is important.\\n2. The bi-level optimization strategy is rational.\", \"weaknesses\": \"1. There is still room for improvement in the paper writing.\\n\\n 1.1. 
It's unnecessary to name the two networks \\\"fitnet\\\" and \\\"searchnet,\\\" as it seems intended to make people think this is a significant innovation. However, in meta-learning, this kind of separated network design and bi-level optimization paradigm is very common.\\n\\n 2.1. The notations are somewhat confusing. For example, the authors didn't clearly define the output of the search net in Section 3.2, making it hard to understand Sec 3.2. It wasn\\u2019t until I finished reading the method section that I realized the output is actually a sample-wise score, forming an aggregation of scores for different queries.\\n\\n2. The novelty of this paper is relatively limited. The proposed meta-learning/bi-level optimization has been applied to AL [1,2]. Also, I think the algorithm design is too complicated.\\n\\n3. The motivation for modeling the scores by GMM distributions is unclear. Why is the score function of each strategy distributed as a Gaussian Distribution? Why is the final score function a linear weighted aggregation of different strategies? The authors should provide a concrete application or example.\\n\\n4. The comparison methods are too outdated, with the latest ones being LPL and BADGE from 2019. Additionally, the datasets are quite limited; despite the complexity of the method design, only CIFAR and MNIST datasets were used. Validation should be conducted on the ImageNet dataset (at least Image100). Otherwise, given that the algorithm design is much more complex than the baselines, its effectiveness cannot be convincingly demonstrated.\\n\\n[1] Kunkun Pang, Mingzhi Dong, Yang Wu, and Timothy Hospedales. 2018. Meta-learning transferable active learning policies by deep reinforcement learning. 
International Workshop on Automatic Machine Learning (ICML AutoML 2018).\\n\\n[2] https://grlplus.github.io/papers/96.pdf\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}" ] }
4RHdGVimNA
StagFormer: A Staggered Transformer for Decoding Layers in Parallel
[ "Dylan J Cutler", "Arun Kandoor", "Nishanth Dikkala", "Xin Wang", "Nikunj Saunshi", "Rina Panigrahy" ]
Standard decoding in a Transformer based language model is inherently sequential as we wait for a token’s embedding to pass through all the layers in the network before starting the generation of the next token. In this work, we propose a new architecture, StagFormer (Staggered Transformer), which staggers execution along the time axis and thereby enables parallelizing the decoding process along the depth of the model. We achieve this by breaking the dependency of the token representation at time step $i$ in layer $l$ upon the representations of tokens until time step $i$ from layer $l−1$. Instead, we stagger the execution and only allow a dependency on token representations until time step $i−1$. The later sections of the Transformer still get access to the “rich” representations from the prior section, but only from those token positions which are one time step behind. StagFormer allows for different sections of the model to be executed in parallel, yielding up to 33% speedup in decoding while being quality neutral. We also explore many natural variants of this idea. We present how weight-sharing across the different sections being staggered can be more practical in settings with limited memory. We show how one can approximate a recurrent model during inference using such weight-sharing. We explore the efficacy of using a bounded window attention to pass information from one section to another, which helps drive further latency gains for some applications. We also demonstrate the scalability of the staggering idea over more than 2 sections of the Transformer.
[ "decoder only language models", "transformers", "staggered execution", "pipelining", "parallel decoding", "efficiency" ]
Reject
https://openreview.net/pdf?id=4RHdGVimNA
https://openreview.net/forum?id=4RHdGVimNA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qc5UGFmSL7", "ldF03TjA0d", "ekP3ew14TH", "eHBPzsA6g4", "avnxbUWAzI", "VGBH5jDHMJ", "VEChV6VheC", "RhrRZWYkaO", "LYTFw6ff20", "FDRlKJmfsC", "EOPsl35pjx", "8ewj756KIB", "64Feb2oOTV" ], "note_type": [ "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730595831791, 1732772232058, 1730651863158, 1737523652534, 1732762031046, 1732801900312, 1734909234332, 1732727798968, 1730641449042, 1730810878741, 1732762919536, 1732854699343, 1732761737419 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_Apqd" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_oS5P" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_oS5P" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4637/Authors" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_Apqd" ], [ "ICLR.cc/2025/Conference/Submission4637/Area_Chair_94ro" ], [ "ICLR.cc/2025/Conference/Submission4637/Authors" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_TMDV" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_vLV7" ], [ "ICLR.cc/2025/Conference/Submission4637/Authors" ], [ "ICLR.cc/2025/Conference/Submission4637/Reviewer_vLV7" ], [ "ICLR.cc/2025/Conference/Submission4637/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel transformer architecture that effectively reduces the number of sequential steps (layers) during the decoding process by staggering the computation across different time-steps. 
This allows for improved parallelism during decoding individual sequences, providing speedups during inference.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"staggered computation leads to significant improvements in per-time-step decoding speeds while slightly improving performance\", \"provides results and analysis of different variants of staggered transformers that further explores the architecture's efficacy\"], \"weaknesses\": [\"Biggest critique is that it lacks comparative analysis of staggering computation vs. simply increasing the width of the model and lowering the number of layers, as this increases per layer parallelism while decreasing the number of layers leading to a similar improvement in decoding speed.\", \"This technique is possibly only useful for speeding up decoding when only a single sequence is being decoded. A non-staggered model could in theory process twice the batch size as it has half the parallelism (and hence half the per layer memory requirement) of a model staggered with p=2.\", \"StagFormer is possibly slower to train (as inferred by its slower pre-filling speed)\", \"Paper could be further refined (minor critique):\", \"Some references are configured incorrectly (Table ?? in page 5, \\\"TODO\\\" in page 8)\", \"Plots have unnecessary information (Figure 4 doesn't need texts like /1/summarize/train)\"], \"questions\": \"Addressing the weaknesses outlined above would improve the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for addressing my concerns.\", \"**Memory bottlenecks during decoding**: Thank you for the clarification. However, the paper itself does not mention parallelization across multiple chips, except for the fact that 16 TPU chips were used for measurements. 
If this parallelization is the main advantage of StagFormers, it should be discussed in relation to previous methods. Relevant topics include: challenges with multi-device parallelization of transformers; existing techniques such as tensor parallelism; detailed explanation of how model parameters and KV cache of StagFormers are parallelized across devices and how this outperforms existing methods in mitigating key costs (compute, memory, and/or communication)\\u2013at least in principle; comparison with vanilla transformers using existing parallelism strategies. In the current experiments, it is not clear what parallelism strategies were used for each model.\", \"`instead of operating a size B Transformer model in parallel on C accelerator chips, we propose serving a size B (approximately) StagFormer model on 2C chips`: how much better/faster is StagFormer compared to using 2C chips with existing parallelism techniques on vanilla transformers?\", \"**Misleading task performance of the Recurrent variant**: I think it is misleading to attribute these scores to the recurrent decoding variant, as it does not represent the performance of the model in generation (while models of this size and training length are typically not used for generation, I think the expectation is that these scores are used to gauge the potential of the architecture when they are eventually scaled up). Only evaluations that use decode-stage outputs should be attributed to the recurrent decoding variant.\", \"**Architecture details**: Thank you for the clarification.\"]}", "{\"summary\": \"The authors present the Staggered Transformer (StagFormer) and its variants which relieve sequential dependancies in the decoding pipeline to enable higher levels of parallel execution.\\n\\nConsider a transformer with two stacks of layers, A (bottom half) and B (upper half). In vanilla transformers, the input token embedding is passed to stack A. Then, the output of stack A is passed to stack B. 
All layers apply self-attention on outputs of the previous layer.\\n\\nIn the baseline StagFormer (`Separate-Weights`), stack A is the same. However, stack B takes in the input token embedding rather than the output of stack A.\\nTo supplement this, stack B applies cross-attention on the final outputs of stack A, up until the previous token. In other words, stack B cross-attends to the outputs of *all previous input tokens* from stack A, instead of directly inputting that of the *current* input token. This relieves the dependency of stack B on stack A, within a single decoding step, thus both A and B can be computed simultaneously.\", \"the_authors_investigate_many_variants_of_this_design\": \"1. `Shared-Weights`: this is where stack A and stack B share the same model parameters (excluding the cross-attention layers which are unique to stack B).\\n2. `Recurrent, Shared-Weights`: this is a unique decoding method for the `Shared-Weights` trained model. In `Shared-Weights` stack A and B are identical, except that stack B applies cross-attention to outputs from stack A. Essentially, the shared stack S (= A = B) is first forwarded without cross-attention, and then forwarded a second time *with* cross-attention, attending to outputs from the first forward pass. The `Recurrent` setting refers to that where the first forward pass is skipped, and cross-attention in the second pass attends to outputs of the \\\"second\\\" pass from the previous decoding step.\\n3. `p > 2`: this is where more than two stacks are considered.\\n\\nWhen compared to vanilla transformers pretrained from scratch, StagFormers show various advantages, mainly:\\n- `Shared-Weights 2x18L`: StagFormer outperforms the vanilla 18L baseline (with roughly same parameters) in both perplexity and average task performance. Using recurrent decoding (roughly matching 18L baseline computation), average task performance lies between the two. 
StagFormer underperforms the vanilla 36L baseline with roughly same computation in perplexity, but performs comparably on tasks.\\n- `Separate-Weights 2x18L`: StagFormer outperforms the vanilla 36L baseline (with roughly same parameters and compute) in both perplexity and task performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea and architecture design are very novel\\n1. The authors propose numerous variants which showcase the potential extension of the idea across various axes\\u2013parallel execution, weight sharing, recurrent computation.\\n1. The architecture shows clear advantages over vanilla transformers across its variants\\n1. The writing is easy to follow and visual depiction of the architecture and its variants are superb.\", \"weaknesses\": \"1. **Memory bottlenecks during decoding may hinder benefits of parallel execution, which is not discussed**: LM decoding is typically bottlenecked by memory rather than compute (see references below). When batch size x context length is small, memory access is dominated by model parameter access. Otherwise, memory access is dominated by KV cache access. While StagFormer can conceptually *parallelize* execution of layers, the associated memory access load cannot be parallelized. In fact, the cross-attention layer will add additional KV cache access overhead. These are critical to assessing the actual wallclock time benefits of decoding with StagFormers compared to vanilla transformers, but is not discussed.\\n 1. Different variants of StagFormers will have different memory bottlenecks. Examples:\\n 1. All variants: cross-attention is added in half of layers. Therefore, the overall KV cache access overhead will increase by 50% (relative to that of self-attention, used in all layers). This will have a larger effect on decoding time as batch size x sequence length becomes large.\\n 1. 
`Separate-Weights`: both stacks can be executed in parallel, but the memory load is identical as the parameters of both stacks must be retrieved from memory. This means that wall-clock time should typically be identical to vanilla transformers, as decoding is bottlenecked by memory access. `Shared-Weights` can solve this issue.\\n 1. **It is unclear which StagFormer variant is used in Table 2, raising questions on the performance vs latency comparison**: While Table 2 states that a \\\"comparable quality StagFormer\\\" is 33% faster than baseline transformer during decoding, the exact variant is unclear. Given the reasons above, it seems likely that this is the `Shared-Weights 2x18L` variant. While its average task performance is comparable to baseline 36L, its PPL is in the middle of that between vanilla 18L and 36L. It would be misleading to describe this variant as \\\"comparable quality\\\" to vanilla 36L.\\n 1. **Missing comparison of performance vs latency across model variants**: Expanding on the point above, a comparison of prefill/decode time across model variants will provide a clear picture on the performance vs latency benefits of each model variant. This could take the form of a single table that lists the PPL, task performance, and prefill/decode time for each model. In the case of `p > 2, Shared-Weight` variants, I believe this may actually reveal some advantages in terms of latency.\\n 1. **The additional KV cache overhead of cross attention may slow down decoding for longer contexts**: Since KV cache overhead is quadratic to context length, the decode time advantages as shown in Table 2 will likely diminish with longer contexts, especially in batch decoding. Given the relatively short context length of 1024 tokens considered in this study, compared to modern LLMs with 8K+ context, measurement on longer contexts and larger batch sizes can help gauge the potential of the architecture.\\n1. 
**Misleading task performance of `Recurrent` variant**: In Table 3 (for example), the performance of various tasks are identical between the `Shared-Weights 18L` model and its `Recurrent` counterpart. This is likely because the tasks are measured in a teacher-forcing setting, where the outputs of the prefill stage are used for evaluation. This does not represent the task performance of the `Recurrent` setting, as recurrence is only applied to decoding, as explained in Section 3.2.\\n1. **Experimental results on model variants are hard to follow**: The organization of the results section could be improved to make the comparison between different model variants more clear.\\n 1. Within tables, variations could be better indicated with separate columns, task names could be shortened for space, latency metrics could be included, etc.\\n 1. Results on different variants are presented in multiple tables without a clear organization.\\n1. **Incomplete writing**: \\\"(TODO)\\\" in Line 385, the reference error \\\"??\\\" in Line 267, and numerous typos suggest that this is an incomplete manuscript that is not ready for review.\\n\\nReferences on memory bottlenecks during inference\\n- [Efficiently Scaling Transformer Inference](https://arxiv.org/abs/2211.05102)\\n- [LLM Inference Unveiled: Survey and Roofline Model Insights](https://arxiv.org/abs/2402.16363v4)\\n- [Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve](https://arxiv.org/abs/2403.02310)\\n- [Block Transformer: Global-to-Local Language Modeling for Fast Inference](https://arxiv.org/abs/2406.02657)\", \"questions\": \"1. Can you describe the architecture shape (vocab size, qkv heads, embedding dimensions) and its justification? The vocab size of 256K is quite high for models of this size.\\n1. In Lines ~499-501, the authors mention that cross-attention is linear to input length instead of quadratic with window size 1. Isn't it linear with any fixed window size? 
Considering that the cost of attention mainly stems from KV cache IO during decoding, I think the constant factor with a window size as small as 128 makes the cost of cross-attention negligible compared to self-attention (especially when expanding to modern context lengths of 8K or more).\\n 1. However, the *increase* in performance when going from full cross-attention (1024) to windowed attention with window size 512 and 128 is strange. Can the authors justify this increase in performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer Apqd\", \"comment\": [\"**Comparative analysis vs simply increasing width of the model**: This is an interesting question. However, there is empirical evidence building every day that depth is crucial for reasoning oriented downstream tasks. Although a wider shallower model might achieve a similar log pplx during training it tends to be less performant on downstream tasks compared to deeper models.\", \"**Technique only useful when a single sequence is being decoded**: We would like to emphasize that this is false. Our architecture naturally extends to handle processing for a batch of sequences at once.\", \"**Slower to train**: Indeed, this is the case. However, in many applications today, it is perfectly acceptable to have a model that is slower to train but more performant during inference since the training cost is a one-time cost. For instance, take the popular Llama 3 series of models wherein the smallest models were trained for much longer than is suggested by data-scaling laws with the intention that they will be used repeatedly for inference so a one-time higher training cost is easily justified.\", \"**Writing typos/ polishing of the paper**: We apologize for the typos and missing references. 
We will fix all of them and also polish the presentation in the paper to make it more detailed and include better explanations and insights from the results presented in our work.\"]}", "{\"comment\": \"Thank you for the response, I would like the authors to clarify a few follow up points:\\n\\n> there is empirical evidence building every day that depth is crucial for reasoning oriented downstream tasks. Although a wider shallower model might achieve a similar log pplx during training it tends to be less performant on downstream tasks compared to deeper models.\\n\\nIf there is clear evidence from prior works, can the authors point to specific studies that substantiates these claims and include them in the paper?\\n\\n> We would like to emphasize that this is false. Our architecture naturally extends to handle processing for a batch of sequences at once.\\n\\nThe point of the original question wasn't about whether the model can handle a larger batch size. But rather, given limited resources, a model that has greater degrees of parallelism (stagformer) will require more memory. Therefore, a non-staggered model can potentially process twice as many sequences in parallel as a staggered model with P=2 (twice the parallelism). For this reason, stagformer seems to only improve generation speed when there is an expected fixed batch size. I recommend the authors to clarify this, and simultaneously mention that for edge applications, batch size is typically fixed to 1. I believe this would improve the way the paper is presented.\"}", "{\"metareview\": \"The authors proposed an interesting idea that can possibly result in throughput gains via staggered computation and parameter sharing. However, the manuscript is far from being polished, with missing references, ambiguous variant comparisons, and inadequate consideration of memory overhead that compromises parallel decoding gains. The unclear performance vs. 
latency tradeoffs, along with limited exploration of longer contexts and direct comparisons to other efficient approaches, further weaken the conclusions. Overall, I think the idea seems interesting, but a much more thorough study is needed. Also, the downsides of the proposed approach (reduced depth and how it affects reasoning, etc.) needs to be carefully studied.\", \"additional_comments_on_reviewer_discussion\": \"While the authors addressed some minor concerns, the reviewers' major concerns remained after the rebuttal.\"}", "{\"title\": \"Response to reviewer vLV7\", \"comment\": [\"We would like to thank the reviewer for their careful reading of our paper and for the feedback provided. We try to address the concerns raised below.\", \"**Typos/Paper organization and polishing**: We apologize for the typos and missing references. We will fix them. We will also take a pass to make the presentation of our main results cleaner and more detailed and polished. Can you elaborate on what you mean by \\u201cfew results for proof of concept\\u201d?\", \"**Table 3: Shared Weights Stagformer (1.6B) outperforming 2.8B Baseline on downstream evaluations**: While it appears that the shared weights stagformer outperforms the 2.8B baseline model on some evaluations, it is not a universal trend and not hence inconclusive whether it is better than a 2.8B model. Moreover, in general we don\\u2019t expect it to outperform a 2x depth baseline. That is why we didn\\u2019t focus too much on the specific numbers on a few evals where the stagfomer model was outperforming the 2x depth baseline.\", \"**Table 1: Strong performance of Stagformer 2.9B model in comparison to Baseline 2.8B model**: As you correctly point out, the strong performance of the StagFormer 2.9B model in comparison to the Baseline 2.8B is due to the presence of the cross attention layers. In general, we expect it to roughly match the performance of the baseline, i.e. 
the extra cross-attention layers help offset the quality loss from staggering. Indeed, this can be seen in Table 1 where the improvements we see in some columns are very minor over the 2.8B Baseline.\", \"**Measurement of the decoding time**: We apologize for the lack of details in this section. We will add more discussion in this regard. We measured the decoding time using a setup that simulates a Stagformer with 2 stacks running on twice the number of chips used by a baseline model. While we faithfully account for every segment of the StagFormer model, we ignore the inter-chip communication cost between the first and second stacks of the Stagformer. This communication cost is expected to be minimal in practice compared to the other contributor to latency under an optimized hardware setup.\", \"**KV-cache question**: Yes for half of the layers, we would need an extra KV-cache for the cross-attention. Note however that in the setup described above where we use double the number of chips as a baseline model this would not increase the per chip memory load.\"]}", "{\"summary\": \"This paper proposes a novel Transformer architecture called StagFormer designed to improve the efficiency of decoding in Transformer-based language models by enabling the parallel execution of layers along the depth axis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. StagFormer introduces a unique method to break the sequential dependency of layers in Transformers, enabling parallel execution.\\n2. Experiments demonstrate significant latency reduction while maintaining or even exceeding the quality of a standard Transformer.\\n3. The paper investigates different StagFormer variants, offering flexibility and adaptability to various scenarios and resource constraints.\\n4. The paper effectively explains the StagFormer concept and its variants, supported by clear diagrams and algorithms.\", \"weaknesses\": \"1. Limited exploration of p > 2. 
While the paper explores StagFormer with more than two stacks, it acknowledges performance degradation and the need for further research in this area.\\n2. The paper mentions the communication cost associated with parallel execution but doesn't offer concrete solutions to mitigate it.\\n3. While the Pile dataset is comprehensive, evaluating on additional datasets would strengthen the generalizability of the findings.\\n4. Comparing StagFormer with other methods for efficient Transformer inference, such as speculative decoding, would provide a more comprehensive perspective.\", \"questions\": \"1. How does varying the depth of individual stacks in StagFormer affect the trade-off between decoding speed and model quality?\\n2. What factors determine the optimal number of stacks for a given application, balancing computational efficiency and performance?\\n3. Could the staggering concept be extended to encoder-decoder Transformers, like those used in machine translation?\\n4. How well could StagFormer be combined with other techniques, like quantization or knowledge distillation, to further enhance decoding efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed a new architecture StagFormer, which stagger the time dependency between the lower and upper layers. The overall design seems a little non-intuitive, but has a lot of potential for throughput and performance. 
For example, parameter sharing or local cross-attention could yield better throughput.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"StagFormer architecture is interesting, and has very good potential for both performance and throughput.\", \"The idea of parameter sharing and recurrent decoding looks good.\"], \"weaknesses\": [\"I like the concept and potential of this paper, but I believe that this paper is not well-organized, and looks like unfinished work yet. For example, there is missing reference in L.267 (I guess this refers to Table 3), there are a few results for proof of concept.\", \"Table 3 is showing few-shot results for gray, blue, red lines in Figure 4 (correct me if I\\u2019m wrong.) I wonder why shared-weights StagFormer (blue) outperforms Baseline 2.8B params (red) in some benchmarks, even though it shows higher loss values.\", \"What makes StagFormer 2.9B to outperform Baseline 2.8B params in Table 1? Is it due to cross-attention in upper layers? This looks somewhat interesting and also confusing because I thought the changed structure (using previous timestep\\u2019s intermediate activations) could degrade performance a lot.\", \"How did the authors measure the decoding time in Table 2? Running separate parameters in parallel is not trivial, I believe. Is it actual time or hypothetical time by assuming parallel execution of them?\"], \"questions\": [\"For KV-caches, the total KV caches are a little increased by the amount of one layer for cross-attention in upper layers, rights?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer oS5P\", \"comment\": [\"We thank you for your review and feedback on our paper. We hope to address your concerns below.\", \"**Memory bottlenecks during decoding**: Thank you for these questions. 
We will add a detailed discussion of this topic to clarify some possible misconceptions with our current write-up. We agree that memory access can dominate decoding times. Irrespective of whether we are in a small batch size/context length setting or a large batch size/context length setting, instead of operating a size B Transformer model in parallel on C accelerator chips, we propose serving a size B (approximately) StagFormer model on 2C chips which will reduce the latency of decoding each token without adding additional memory load `on any given chip`. The use of extra hardware chips allows us to be memory usage neutral with respect to the baseline model. Our simulated latency measurement takes this memory load into account. In particular, your claim that \\u201cWhile StagFormer can conceptually parallelize execution of layers, the associated memory access load cannot be parallelized.\\u201d doesn\\u2019t hold here since the additional hardware chips help with the extra memory access required for the cross attention KV caches.\", \"**Stagformer variant used in Table 2**: We apologize for the lack of clarity here. The variant we used here is in fact the Separate Weights - **Stagformer 2x18L model**. We don\\u2019t run into the memory issues you mentioned because of the explanation given above. Note that we do take a hit compared to the theoretical latency savings of 50% (and only achieve ~33%) because of the additional cross-attention and a few other engineering overheads.\", \"**Misleading task performance of the Recurrent variant**: We apologize for not making this more clear. We will explicitly mention that on scoring tasks recurrent decoding doesn\\u2019t make a difference and the performance remains the same since prefill still goes through 2 stacks.\", \"**Incomplete references**: We apologize for this oversight. 
We will fix all typos, missing references and also make a pass over the entire paper to polish the writing and clarity of the results.\", \"**Experimental results on model variants are hard to follow**: We apologize for the confusion and will re-organize the sections to make the reading more clear. We will also break down the results into more subsections so as to highlight the main insights within each table more directly.\", \"**Architecture details**: A vocab size of 256k although uncommon for models of this size, is not completely unheard of. For instance, Gemma which has models of a similar size used 256k vocab as well. Our 18 layer baseline uses 1536 model dimension, 12288 as hidden dimension and 12 attention heads. We will add these details to the paper.\", \"**Confusion on time complexity of computing cross-attention**: We apologize for this confusion. We were referring to the time complexity of attention computation during the prefill step wherein we need to compute contextual embeddings for every token in the sequence. This is quadratic normally and can be made linear when choosing a small window size. Consequently, during decoding, the linear cost of attention during decoding steps drops to constant with a constant window size. We will clarify this in the writeup.\"]}", "{\"comment\": \"Thanks for clarifying some misleading points.\\n\\n**1.** I clarified my words: \\\"The experiments provided seem limited in scope, hinting at strong performance only in a few simplified settings. 
To further validate the robustness, a more extensive evaluation and delicate ablation studies are recommended, like exploring across various model sizes, architectures.\\\".\\n\\n**2.** Thanks.\\n\\n**3.** I believe that StagFormer has a great potential by looking at the overall results.\\n\\n**4.** Thanks.\\n\\n**5.** Thanks.\\n\\nWhile I truly believe in the potential of this work and its contribution, I maintain my score as I feel it's not yet ready for publication.\"}", "{\"title\": \"Response to Reviewer TMDV\", \"comment\": [\"**Limited exploration of p>2 setting**: Since this is the first paper introducing this idea, we wanted to focus more on simpler settings where we found the idea to give strong benefits. We tried extending the idea naturally to p>2 and found diminishing returns. Indeed, this is to be expected as one can imagine that for a very large p, the staggering basically forces a small Transformer network to predict p tokens into the future which becomes information theoretically impossible as p increases. (For instance, it might involve answering a question before the question is even asked). We do expect the performance benefits to drop off after some small value of p. We believe the exploration of techniques to avoid this dropoff is out of scope of the current paper.\", \"**Mitigating communication cost associated with parallel execution**: This is an important issue and is a natural by-product of using the StagFormer idea to speed up execution. Mitigating this can be achieved by optimized hardware setups which enable fast inter-chip communication. This is outside the scope of the current paper though.\", \"**Additional datasets apart from Pile**: We thank the reviewer for raising this point. For language modeling, we believe the Pile is a very comprehensive dataset. 
In fact, some of the well-known contemporary research papers such as \\u201cMamba: Linear Time Sequence Modeling with Selective State Spaces\\u201d rely on pre-training experiments on the Pile. Pre-training experiments on a dataset the size of the Pile already take a significant amount of compute resources, and it is not easy to scale up to larger datasets with limited compute.\", \"**Comparison with other methods for efficient Transformer inference**: Indeed, there is plenty of research on making Transformer inference more efficient, including the methods the reviewer pointed out, such as speculative decoding, quantization, and knowledge distillation, among others. We don\\u2019t view StagFormer as competing with these other methods; rather, StagFormer can be used in conjunction with most of these methods (very naturally with techniques such as knowledge distillation and quantization) to give stronger gains. Due to the tremendous amount of research on methods for optimizing Transformer inference, it is hard to perform a comparative analysis with each of them. We will try to include comparative experiments with a few well-known methods in future drafts.\", \"**Effect of depth on decoding speed and model quality**: Increasing the depth of each StagFormer stack would make the model higher quality, but it would decode slower.\", \"**Factors influencing the optimal number of stacks**: This is a good question. Given a quality threshold that we want to achieve, we believe a small number of stacks (many times just 2) is optimal, as too many stacks can start causing severe quality degradations.\", \"**Extending the staggering idea to encoder-decoder transformers**: The staggering idea can be used in the decoder of an encoder-decoder architecture naturally.\"]}" ] }
4R71pdPBZp
Self-Evolving Multi-Agent Collaboration Networks for Software Development
[ "Yue Hu", "Yuzhu Cai", "Yaxin Du", "Xinyu Zhu", "Xiangrui Liu", "Zijie Yu", "Yuchen Hou", "Shuo Tang", "Siheng Chen" ]
LLM-driven multi-agent collaboration (MAC) systems have demonstrated impressive capabilities in automatic software development at the function level. However, their heavy reliance on human design limits their adaptability to the diverse demands of real-world software development. To address this limitation, we introduce EvoMAC, a novel self-evolving paradigm for MAC networks. Inspired by traditional neural network training, EvoMAC obtains text-based environmental feedback by verifying the MAC network's output against a target proxy and leverages a novel textual backpropagation to update the network. To extend coding capabilities beyond function-level tasks to more challenging software-level development, we further propose RSD-Bench, a requirement-oriented software development benchmark, which features complex and diverse software requirements along with automatic evaluation of requirement correctness. Our experiments show that: i) The automatic requirement-aware evaluation in RSD-Bench closely aligns with human evaluations, validating its reliability as a software-level coding benchmark. ii) EvoMAC outperforms previous SOTA methods on both the software-level RSD-Bench and the function-level HumanEval benchmarks, reflecting its superior coding capabilities.
[ "Software development", "LLM", "Multi-agent collaboration" ]
Accept (Poster)
https://openreview.net/pdf?id=4R71pdPBZp
https://openreview.net/forum?id=4R71pdPBZp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vdx6bC7HrN", "rQMZQZU4ty", "m3A2IIp05X", "lrXvnw78HV", "kMzlT22g0q", "jxsBDVCp8E", "bX00OZSuXV", "bFcW1S4xUX", "WyiykVnDPt", "VXbq0eSSQc", "S423ClRR1P", "QxY7aw3k40", "QYpUIYdvxM", "OceZpyAIL6", "NYx9Lgc5DT", "Ln2NeJKOzv", "Jw6A5D6nqh", "IhTkMAcSoS", "GXfZ0d4UQ6", "F9xd1iaCDu", "CsRyvYiMQM", "B2HCcUXGQY", "A6Z420v5cG", "15NWJ7qUuP" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732131898865, 1734698904783, 1732131642509, 1732131275161, 1730680169642, 1732635098393, 1732131877582, 1730657920361, 1732736216421, 1737523994705, 1732608216127, 1732131535773, 1732449037552, 1732139533210, 1730722929340, 1732584354779, 1730302334838, 1732131238397, 1732449079043, 1732131341681, 1732448992598, 1732448950345, 1732131580118, 1732131806083 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Area_Chair_ZWhA" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_32VP" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_aNRL" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_sExs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_aNRL" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Area_Chair_ZWhA" ], [ 
"ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_kPYh" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_32VP" ], [ "ICLR.cc/2025/Conference/Submission9613/Reviewer_sExs" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Area_Chair_ZWhA" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Area_Chair_ZWhA" ], [ "ICLR.cc/2025/Conference/Submission9613/Area_Chair_ZWhA" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ], [ "ICLR.cc/2025/Conference/Submission9613/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer aNRL\", \"comment\": \"[**Weakness 8**]\\nThe paper lacks specific settings for the Coding Team in both the HumanEval and RSD-Bench benchmarks. Please provide these details to improve clarity on the experimental configuration and consistency across benchmarks.\\n\\n**Answer:** \\nSorry for the confusion. We will provide more details in the revision. \\n1. The coding team is automatically organized by the Coding Organizer agent (Figure 2). For both the HumanEval and RSD-Bench benchmarks, most of the agent prompts remain consistent, ensuring highly comparable experimental settings. The prompts for the Coding Organizer are provided below. \\n\\n2. There are two key differences in the configuration of **default tasks** and **num_agent**. For the default tasks, RSD-Bench requires fundamental capabilities tailored to different software types, such as GUI and logging for games, which are not needed for HumanEval. Regarding **num_agent**, HumanEval focuses on simpler, function-level tasks, so the maximum number of agents is set to 2. 
In contrast, the more complex RSD-Bench tasks allow for up to 5 agents to accommodate the increased complexity.\\n\\n> **Coding organizer prompt**\\n> According to the new user's task and our software designs listed below: \\n> Task: \\\"\\\\{task\\\\}\\\".\\n> Task description: \\\"\\\\{description\\\\}\\\".\\n> Modality: \\\"\\\\{modality\\\\}\\\".\\n> Programming Language: \\\"\\\\{language\\\\}\\\"\\n> Requirements analysis: \\\"\\\\{requirements\\\\}\\\"\\n> Ideas:\\\"\\\\{ideas\\\\}\\\"\\n> Coding plan: \\\"\\\\{codes\\\\}\\\"\\n> Your goal is to organize a coding team to complete the software development task.\\n> There are two **default tasks: ###**\\n> Besides these tasks, you should pay attention to the unachieved requirements and think step by step to formulate the requirements into concrete tasks.\\n> You should follow the following format: \\\\\\\"COMPOSITION\\\\\\\" is the composition of tasks, and \\\\\\\"Workflow\\\\\\\" is the workflow of the programmers. Each task is assigned to a programmer, and the workflow shows the dependencies between tasks. \\n> \\\\#\\\\#\\\\# COMPOSITION\\n> ```\\n> Task 1: Task 1 description\\n> Task 2: Task 2 description\\n> ...\\n> ```\\n> \\\\#\\\\#\\\\# WORKFLOW\\n> ```\\n> Task 1: []\\n> Task 2: [Task 1]\\n> ...\\n> ```\\n> Please note that the decomposition should be both effective and efficient.\\n> 1) Each decomposed task should include the related the functions. The task description should be clear and concise. \\n> 2) The composition should be kept as small as possible! (LESS THAN **\\\"\\\\{num\\\\_agents\\\\}\\\"**). If there are more than 5 tasks, consider merging the tasks and focus on the most essential features. 
\\n> 3) The decomposed tasks should fully cover the task definitions.\\n> 4) The workflow should not contain circles!\\n\\n---\\n\\n[**Weakness 9**]\\nCould the authors showcase additional examples of the textual gradient analysis and the updating process during the evolution for HumanEval and RSD-Bench?\\n\\n**Answer:** \\n\\nSorry for the confusion. We show the updating process on RSD-Bench and HumanEval in Tables 17 and 20 in the updated appendix. We can see that the updating agent will adjust the job of each coder dynamically according to the results of the test team.\"}", "{\"metareview\": \"The work describes an approach to automated software development. The approach is tested on one benchmark that the paper introduces (RSD-Bench) and on one preexisting benchmark (HumanEval).\\n\\nThe main strength of the submission is that the approach works well in practice, on the two benchmarks.\\n\\nThe paper has the following weaknesses.\\n- I am not sure the paper is in scope for ICLR. Because of the nature of the paper where it focuses on one application only, I am wondering if perhaps a more applied conference might be a better fit.\\n- I am not sure if the paper is reproducible. The paper is heavily empirical so this is even more key than normally. There doesn't seem to be code.\\n- The paper isn't clear enough. For example, it uses the concepts of \\\"textual gradient\\\" and \\\"gradient analysis operator\\\" without first properly defining them in the paper text. 
From the perspective of an ICLR audience, such lack of rigour is a problem.\\n- I am not sure why one needs a multi-agent framework to solve a single-agent task (even though I accept the approach works well).\\n\\nSince reviewers are convinced about the quality of the paper, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"I think that the paper has fundamental weaknesses and should be rejected despite the positive reviews (see list of weaknesses in the metareview).\\n\\nI tried discussing this with the reviewers, but didn't get any replies.\\n\\nI am reluctantly recommending acceptance since I don't think I can overrule all reviewers.\"}", "{\"title\": \"Response to Reviewer aNRL\", \"comment\": \"We sincerely appreciate your thoughtful comments. Below, we respond to each point in detail. If our responses adequately address your concerns, we would be truly grateful if you would consider raising your score. We also remain open to any further discussions that could contribute to enhancing the quality of our paper.\\n\\n[**Weakness 1**] \\nExisting studies, such as [1-4] have also explored automation in LLM-based multi-agent collaboration. Please compare the differences between EvoMAC and these works.\\n\\n**Answer:** \\nThanks for the valuable suggestions. Following previous works[1,2,3,4], we model multi-agent collaboration as a network/graph and provide a detailed comparison in the table below. Compared to previous approaches, EvoMAC offers three distinct advantages.\\n\\n1. EvoMAC jointly optimizes both nodes and edges, while previous approaches either rely on predefined, non-optimizable solutions [1,3,4] or support only separate optimization of nodes or edges[2]. Predefined collaboration structures are limited by human design, as a single structure cannot adapt to all scenarios. Separate optimization of nodes or edges is also suboptimal because both agent roles (nodes) and their connections (edges) are crucial to task completion. 
For example, in a coding task, each agent's role and the dependencies among agents are essential; if agents (nodes) are not properly optimized, subtasks may be incorrectly implemented, and if connections (edges) are not optimized, agents may conflict by modifying the same code, resulting in reduced performance. Our joint node and edge optimization offers greater flexibility, enabling more effective multi-agent collaboration.\\n\\n2. EvoMAC uses external tools to provide informative, objective textual feedback. In contrast, existing methods [1, 2, 3] primarily rely on scalar feedback or offer no feedback at all [4], as textual feedback is often unavailable for most tasks. Textual feedback, however, not only validates the effectiveness of the system like scalar feedback but also identifies specific errors within the collaboration system and offers actionable guidance to improve multi-agent collaboration.\\n\\n3. EvoMAC proposes a novel textual backpropagation method for system optimization while existing approaches rely primarily on heuristic design or scalar-based reinforcement learning (RL) techniques. Heuristic design lacks flexibility, leading to suboptimal performance. RL techniques rely on a single numerical value to optimize the entire complex multi-agent system, making optimization extremely challenging and demonstrating inferior optimization quality. 
Our novel textual backpropagation method leverages detailed textual feedback and strategically categorizes errors and modification suggestions for each node and edge, making optimization more attainable and demonstrating superior performance.\\n\\n\\n| Method | Node (Agent) | Edge (Agent connection) | Feedback | Tool | Optimizer | Test time evolving |\\n| ----------- | -------------------- | ----------------------- | ------------------------------------- | ------------ | ----------------------- | ------------------ |\\n| DyLAN[1] | Predefined | Optimizable | Scalar (Heuristic design) | - | Heuristic design | $\\\\checkmark$ |\\n| GPTSwarm[2] | Separately optimized | Separately optimized | Scalar (Objective performance) | - | RL | - |\\n| ADAS[3] | Searched | Predefined | Scalar (Objective performance) | - | Heuristic design | $\\\\checkmark$ |\\n| MacNet[4] | Predefined | Predefined | - | - | - | - |\\n| EvoMAC | Jointly optimized | Jointly optimized | Text (Objective environment feedback) | $\\\\checkmark$ | Textual backpropagation | $\\\\checkmark$ |\\n \\n---\\n\\n[**Weakness 2**] \\nEvoMAC's updating process includes removing agents that have completed their tasks. Can the entire agentic workflow be replayed once a task is finished, or is the removed agent permanently excluded from further iterations?\\n\\n**Answer:** \\nSorry for the confusion. We will provide more details in the revision. Specifically, the removed agent remains part of the agentic workflow. However, if its subtask is marked as completed, the agent will not be executed in that iteration. This allows the final agentic workflow to be replayed by executing all agents in the workflow as needed.\\n\\n---\"}", "{\"title\": \"Response to Reviewer kPYh\", \"comment\": \"[**Question 3**]\\nIn Table 2, there is no comparison of results with environment tools but without evolving. This should be added.\\n\\n**Answer:** \\n\\nThank you for the suggestions. 
We would like to provide clarification from two aspects.\\n\\n1. The results with environmental tools but without evolution are shown in Figure 6, representing the performance when the evolving time is set to 0. \\n\\n2. We have included the results in Table 2. The data demonstrates that evolving leads to performance gains of 23.28%, 27.93%, 9.44%, and 9.67% on Web-Basic, Web-Advanced, Game-Basic, and Game-Advanced, respectively.\\n\\n||Coding|Testing|Evol.| Env.| Web-Basic| Web-Advanced | Game-Basic | Game-Advanced | \\n|-|-|-|-|-|-|-|-|-| \\n|g|Multi|Multi|$\\\\checkmark$|$\\\\checkmark$|90.75|67.20|77.54|51.60| \\n|h|Multi|Multi|-|$\\\\checkmark$|67.47|39.27|68.10|41.93|\"}", "{\"summary\": \"The paper introduces a novel framework called EvoMAC, aimed at enhancing the capabilities of LLM-driven multi-agent collaboration (MAC) systems in software development. The authors argue that traditional MAC systems are heavily reliant on human-designed workflows, which limits their adaptability and performance in real-world scenarios. EvoMAC seeks to overcome these limitations by enabling self-evolution of agents and their connections during task execution.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This self-evolving paradigm allows MAC networks to adapt iteratively based on environmental feedback. The framework employs a mechanism similar to neural network backpropagation, where the output of the MAC network is verified against a target proxy, facilitating continuous learning and improvement.\", \"The RSD-Bench provides a structured benchmark for software-level coding tasks, focusing on comprehensive requirements rather than isolated functions.\", \"By incorporating unit tests and compilers as feedback mechanisms, EvoMAC reduces subjectivity and provides reliable feedback, which is critical for verifying the correctness of generated code. 
The objective environment-based feedback is an effective alternative to critique agents, which can introduce bias and hallucinations.\"], \"weaknesses\": [\"The EvoMAC framework introduces significant complexity with its multi-agent setup, dynamic adjustments, and textual backpropagation mechanism. This complexity may limit the framework's accessibility and implementation ease for real-world adoption outside of specialized research contexts.\", \"Although EvoMAC performs well with large models like GPT-4o-Mini, its performance with smaller or less capable models is unclear. This reliance may restrict its applicability, particularly in environments with limited computational resources.\", \"RSD-Bench focuses on website and game software types, which may not comprehensively represent the diversity of real-world software development tasks. Expanding the evaluation to include other domains, such as enterprise applications or data processing software, would enhance the generalizability of the results.\"], \"questions\": \"1. How sensitive is EvoMAC to the quality and specificity of feedback from unit tests? If the unit tests are incomplete or overly general, would EvoMAC still produce reliable code, or would it require stricter validation criteria?\\n2. Can EvoMAC work effectively with models of different sizes, or does it rely on the power of high-capacity LLMs? Would it perform satisfactorily with smaller models that might be more efficient in constrained environments?\\n3. Can the self-evolving mechanism be applied to other domains outside software development? If yes, how?\\n4. Given EvoMAC\\u2019s iterative approach, how would it handle larger software projects with thousands of lines of code and extensive requirements? 
Are there specific design considerations for scaling it to more extensive projects?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 32VP\", \"comment\": \"Thank you for your thoughtful review and for taking the time to consider our responses. We appreciate your engagement and feedback. Could you kindly share the remaining concerns that led to your decision not to raise the score? Understanding this would greatly help us improve our work further. Thank you again for your time and insights.\"}", "{\"title\": \"Response to Reviewer aNRL\", \"comment\": \"[**Weakness 6**]\\nThe Testing Team\\u2019s performance significantly impacts the Coding Team's potential, particularly in the HumanEval benchmark. How is the Testing Team\\u2019s performance evaluated to ensure alignment with target performance objectives and to prevent divergence? Additionally, how is the Testing Team\\u2019s performance quantified within RSD-Bench?\\n\\n**Answer:** \\nTo address the reviewer's confusion, we provide clarification from the following three perspectives.\\n\\n1. On HumanEval, we manually design prompts for the testing agent to mimic the given test case examples, prioritizing precision to ensure accuracy over completeness. This approach ensures that the test-case-based environment feedback remains highly accurate while minimizing potential negative impacts on code generation. The testing performance is primarily evaluated automatically, with a small portion manually assessed. Given HumanEval's high accuracy, we treat the generated code that passes the evaluation test cases as ground-truth code. Generated test cases are marked as incorrect if the ground-truth code fails to pass them. For cases where the code does not pass, we manually verify the accuracy of the test cases.\\n\\n2. 
To quantify testing performance within RSD-Bench, we manually assessed the accuracy of the generated test cases to determine if they correctly reflect the requirements. The overall accuracy ranges from approximately 70% to 80%. These test cases are effective in verifying the completeness of requirements. While not perfect, they significantly contribute to the evolution process, enabling notable performance gains of 23.28%, 27.93%, 9.44%, and 9.67% on Web-Basic, Web-Advanced, Game-Basic, and Game-Advanced tasks, respectively. \\n\\n3. To facilitate automatic verification of testing performance, we are developing a more comprehensive benchmark that incorporates ground-truth code. The inclusion of ground-truth code will enable automatic assessment of test case accuracy, as properly implemented code should pass all valid test cases unless the test cases themselves are flawed.\\n\\n---\\n\\n[**Weakness 7**] \\nThe paper does not specify the stopping criteria for EvoMAC\\u2019s iterative evolution process. Could the authors provide details on the stopping mechanism or criteria?\\n\\n**Answer:** \\nSorry for the confusion. We will provide more details in the revision. Specifically, the process employs two stopping criteria: \\n1. The iteration limit is reached, which is set between 3 and 5 iterations to balance effectiveness and efficiency. \\n2. All test cases are successfully passed, signifying that all requirements have been met and no further iterations are needed.\"}", "{\"summary\": \"The paper presents EvoMAC, a self-evolving multi-agent collaboration (MAC) network designed to advance LLM-based multi-agent systems beyond function-level tasks to software-level coding tasks. EvoMAC employs a unique textual backpropagation mechanism to iteratively update agents and their connections in response to text-based environmental feedback, effectively enhancing task completion without human intervention. 
By formulating the evolving process as analogous to neural network training, EvoMAC provides a clear structure for defining and extracting improvements. This approach underscores the significance of iterative refinement in the software generation process, enabling continuous improvement and adaptability in complex coding tasks.\\n\\nTo evaluate EvoMAC, the authors introduce RSD-Bench, a novel benchmark with complex and diverse software requirements that includes automated requirement verification. EvoMAC demonstrates superior performance on RSD-Bench and the HumanEval function-level benchmark, outperforming state-of-the-art methods and showcasing its robustness and adaptability across various evolving iterations and LLM configurations.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Effectively demonstrates EvoMAC's evolution process as analogous to neural network training, establishing a reliable target proxy for evaluation and constructing a clear objective.\", \"The paper is well-structured and easy to follow, with thorough explanations of EvoMAC\\u2019s self-evolving process, the design of the RSD-Bench, and detailed descriptions of experimental procedures. Figures and benchmarks illustrate the methodology effectively, aiding comprehension.\", \"By addressing limitations in traditional MAC systems and demonstrating EvoMAC\\u2019s efficacy on challenging software-level tasks, this work sets a promising precedent for adaptive agent frameworks in automated software development, making it valuable for both research and practical applications.\", \"The RSD-Bench is more practical in software generation evaluation, as it aligns closely with the real-world software development process. By incorporating unit tests at both the task and function levels, it establishes a rigorous and precise mechanism for evaluating software generation quality. 
Additionally, RSD-Bench demonstrates strong human alignment (0.9922), providing a reliable evaluation metric. This paper also analyzes the rationale of RSD-Bench in comparison to existing benchmarks, demonstrating its necessity.\", \"This paper provides thorough experiments and analyses that robustly demonstrate EvoMAC\\u2019s effectiveness and performance. EvoMAC\\u2019s strong performance on both the RSD-Bench and HumanEval benchmarks highlights its high quality and efficacy in handling complex coding tasks.\"], \"weaknesses\": [\"Existing studies, such as [1-4], have also explored automation in LLM-based multi-agent collaboration. Please compare the differences between EvoMAC and these works.\", \"EvoMAC's updating process includes removing agents that have completed their tasks. Can the entire agentic workflow be replayed once a task is finished, or is the removed agent permanently excluded from further iterations?\", \"Given that EvoMAC includes multiple evolutionary iterations, direct comparisons with standard multi-agent frameworks may not be entirely fair. Could you also provide the number of LLM calls for tasks in RSD-Bench? This metric would offer a clearer understanding of EvoMAC\\u2019s performance.\", \"EvoMAC primarily focuses on models like GPT-4, Claude 3.5, and Gemini, but it is unclear if the framework can adapt to less powerful models, such as GPT-3.5 or open-source options like DeepSeek. Presenting results across a broader range of LLMs would support EvoMAC\\u2019s claims of robustness and adaptability.\", \"Can the authors provide additional examples of the unit tests designed within RSD-Bench?\", \"The Testing Team\\u2019s performance significantly impacts the Coding Team's potential, particularly in the HumanEval benchmark. How is the Testing Team\\u2019s performance evaluated to ensure alignment with target performance objectives and to prevent divergence? 
Additionally, how is the Testing Team\\u2019s performance quantified within RSD-Bench?\", \"The paper does not specify the stopping criteria for EvoMAC\\u2019s iterative evolution process. Could the authors provide details on the stopping mechanism or criteria?\", \"The paper lacks specific settings for the Coding Team in both the HumanEval and RSD-Bench benchmarks. Please provide these details to improve clarity on the experimental configuration and consistency across benchmarks.\", \"Could the authors showcase additional examples of the textual gradient analysis and the updating process during the evolution for HumanEval and RSD-Bench?\", \"[1] Liu Z, Zhang Y, Li P, et al. Dynamic llm-agent network: An llm-agent collaboration framework with agent team optimization[J]. arXiv preprint arXiv:2310.02170, 2023.\", \"[2] Zhuge M, Wang W, Kirsch L, et al. GPTSwarm: Language Agents as Optimizable Graphs[C]//Forty-first International Conference on Machine Learning.\", \"[3] Hu S, Lu C, Clune J. Automated design of agentic systems[J]. arXiv preprint arXiv:2408.08435, 2024.\", \"[4] Qian C, Xie Z, Wang Y, et al. Scaling Large-Language-Model-based Multi-Agent Collaboration[J]. arXiv preprint arXiv:2406.07155, 2024.\"], \"questions\": \"Please refer to the questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses. My questions have been addressed, and I will keep my score and acceptance. 
I repeat my comments on the visualization problems for accessibility, but they don't affect my original score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the authors' detailed responses, most of my concerns have been addressed, and I will keep my score.\"}", "{\"title\": \"Response to Reviewer 32VP\", \"comment\": \"[**Weakness 3**]\\nRSD-Bench focuses on website and game software types, which may not comprehensively represent the diversity of real-world software development tasks. Expanding the evaluation to include other domains, such as enterprise applications or data processing software, would enhance the generalizability of the results.\\n\\n**Answer:** \\nTo address the reviewer's concerns about RSD-Bench's diversity and EvoMAC's generalizability, we offer clarification from the following three perspectives.\\n\\n1. Games and websites represent two typical software types, and the testing covers 11 distinct categories. These categories expand beyond previous benchmarks focused on function completion and bug fixing to provide a more comprehensive evaluation of coding capabilities. We are extending the benchmark to include additional software types. \\n\\n2. EvoMAC is validated across three distinct software development tasks\\u2014function completion, website, and game, where it outperforms prior works, demonstrating its effectiveness and generalizability.\\n\\n3. We extend the evaluation of EvoMAC to include data processing tasks by utilizing the InfiAgent-DABench[1] dataset (hard). The table below shows that EvoMAC enhances the performance in handling complex data analysis tasks, showcasing its effectiveness and generalizability to diverse software types.\\n\\n| Method | Accuracy |\\n| - | - |\\n| Single | 46.50% |\\n| MapCoder | 75.00% |\\n| ChatDev | 62.50% |\\n| EvoMAC | 82.50% |\\n\\n[1] Hu, Xueyu, et al. 
\\\"InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks.\\\", ICML 2024\\n\\n---\\n\\n[**Question 1**] \\nHow sensitive is EvoMAC to the quality and specificity of feedback from unit tests? If the unit tests are incomplete or overly general, would EvoMAC still produce reliable code, or would it require stricter validation criteria?\\n\\n**Answer:** \\n\\nTo address the reviewer's concerns, we provide clarification from the following three perspectives.\\n\\n1. EvoMAC demonstrates relative robustness to unit test quality. We randomly sampled 100 generated test cases and manually evaluated their accuracy, as shown in the table below. The results indicate that even when testing accuracy is around 50%, the feedback remains valid and contributes to measurable performance gains.\\n\\n2. We manually evaluate testing performance within RSD-Bench, with the overall accuracy ranging from approximately 70% to 80%. These test cases effectively verify the completeness of requirements. Although they do not perfectly cover all code and requirements, they play a significant role in the evolution process, resulting in notable performance gains of 23.28%, 27.93%, 9.44%, and 9.67% on Web-Basic, Web-Advanced, Game-Basic, and Game-Advanced tasks, respectively. \\n\\n3. To enable automatic verification of testing performance, we are developing a more comprehensive benchmark that includes ground-truth code. Incorporating ground-truth code will allow for automatic assessment of test case accuracy, as properly implemented code should pass all valid test cases unless the test cases themselves are flawed. 
This approach will facilitate a more thorough evaluation of testing capabilities and enable adaptive validation criteria, providing more accurate and complete feedback.\n\n| | Testcase Accuracy | Performance Gain |\n| - | - | - |\n| Basic | 55/72(76.38%) | +4.35% (69.56%-73.91%) |\n| Advanced | 15/32(46.87%) | +3.22% (45.16%-48.38%) | \n ---\n\n[**Question 3**] \nCan the self-evolving mechanism be applied to other domains outside software development? If yes, how?\n\n**Answer:** \nYes, our self-evolving mechanism can be applied to domains where informative feedback is available. For instance, it can be applied in the auto-research domain [1,2], which focuses on developing AI agents for automated machine learning tasks, such as dataset creation, algorithm development, and model training. Here, the validation set serves as the target proxy, with feedback provided by validation performance. Analyzing results on the validation set offers valuable insights into the algorithm's effectiveness and can guide further modifications. The self-evolving mechanism iteratively adjusts algorithm generation, collects feedback, and analyzes results, promising to advance automated machine learning performance.\n\n[1] MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering, OpenAI 2024\n[2] MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation\n\n---\"}", "{\"title\": \"From AC.\", \"comment\": \"Reviewer aNRL: if possible, can you reply to the rebuttal?\"}", "{\"title\": \"Response to Reviewer sExs\", \"comment\": \"We sincerely appreciate your thoughtful comments. Below, we respond to each point in detail. If our responses adequately address your concerns, we would be truly grateful if you would consider raising your score. 
We also remain open to any further discussions that could contribute to enhancing the quality of our paper.\n\n[**Weakness 1**] \nI truly believe that the mathematical explanation of the problem (mainly between lines 169-184) is unnecessary, even though I see that this somehow facilitates some later explanations. Although it provides some kind of generalization, the approach still relies on LLMs that are probabilistic by nature and then are not mathematically generalizable (i.e., the function is essentially a message sent to an LLM, so it does not behave exactly as a mathematical generic function as the authors may want to demonstrate). I advise the removal of the mathematical explanation. It can help sometimes but does not add value to the paper. \n\n**Answer:** Sorry for the confusion. The mathematical formulation is incorporated as an objective, drawing an analogy to neural network optimization. This addition aims to help readers better understand the target proxy and its role in motivating the subsequent textual backpropagation-based self-evolving solution. To ensure greater precision and clarity, we will revise this mathematical section to highlight the analogy and avoid claiming that the function is a generic mathematical function.\n\n---\n\n[**Weakness 2**] \nThe paper lacks an actual example/use case in the opposite (or complementary, if the authors decide to keep it) of the mathematical explanation. The authors use this type of explanation in lines 270-271.\n\n**Answer:** \nThanks for the suggestions. We provide a detailed example to clarify each symbol and operation in Table 4 in the updated appendix.\n\n---\n\n[**Weakness 3**] \nTypo and visualization suggestions.\n\n**Answer:** Thanks for the suggestions. 
We will fix the typos and improve the visualization!\n\n---\n\n[**Question 1**] \nWhen explaining their approach (section 3.1), the authors did not mention problems regarding the context window of most LLMs. Since requirements can contain quite a large amount of textual data, is the approach capable of dealing with it without extra techniques (e.g., RAG)? If not, even though the authors did not highlight it as a problem, the context window limitations should be mentioned.\n\n**Answer:** \nEvoMAC can manage extensive requirements without relying on additional techniques. Unlike a single agent, EvoMAC is much less affected by the context window limitations.\n\n1. EvoMAC achieves this through the multi-agent collaboration approach. Multi-agent collaboration effectively mitigates context window limitation issues by breaking down complex and lengthy requirements into smaller, more manageable subtasks. These subtasks fit within the context window of individual agents, enabling them to address specific aspects of the task. Gradually, the collective efforts of multiple agents allow EvoMAC to fulfill the overall extensive task requirements.\n2. Figure 7 shows that a single agent fails when requirements become too lengthy, due to the limited context window. In contrast, EvoMAC (powered by the same language model) maintains superior performance even as requirements lengthen. This demonstrates that the context window limitations do not severely impact EvoMAC, allowing it to effectively address longer requirements.\"}", "{\"summary\": \"This paper proposes a multi-agent collaboration approach to address software development problems. EvoMAC obtains text-based environmental feedback by verifying the match between the MAC network's output and the target proxy, and it updates the network using a novel textual backpropagation technique, thereby achieving the final development outcome. 
Additionally, this paper introduces a software development benchmark called RSD-Bench, which provides more detailed and structured software requirements for documenting user needs compared to previous benchmarks. The final experimental results show that the proposed EvoMAC outperforms other single-agent and multi-agent methods on both RSD-Bench and HumanEval.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes a multi-agent collaboration approach to address software development problems.\\n2. This paper introduces a software development benchmark called RSD-Bench, which provides more detailed and structured software requirements for documenting user needs compared to previous benchmarks.\\n3. Extensive experiments demonstrate that EvoMAC outperforms other single-agent and multi-agent methods on both RSD-Bench and HumanEval.\", \"weaknesses\": \"1. The benchmark proposed in this paper lacks data analysis and some basic statistical information, such as prompt length, the number of final generated files/functions, etc.\\n2. The benchmark proposed in this paper is relatively easy, with the EvoMAC method already achieving around 90% accuracy.\", \"questions\": \"1. Are there any issues with the citation format in the paper?\\n2. Does the paper lack an appendix?\\n3. In Table 2, there is no comparison of results with environment tools but without evolving. This should be added.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my questions. 
After reviewing your responses, I have decided to maintain my initial score.\"}", "{\"summary\": \"The authors provide a paper with twofold contribution: i) a new approach to developing multi-agent collaboration systems for software development and ii) a benchmark to compare their approach with existing approaches to solving the same problem.\nRegarding (i), their approach (called EvoMAC) intends to overcome the limitations of similar multi-agent collaboration systems development workflows, mainly on adaptability and generalization. EvoMAC was designed to mimic standard neural network development, meaning that the errors are \\"backpropagated\\" throughout the agents, creating a self-adaptive multi-agent network.\nRegarding (ii), the benchmark dataset RSD-bench was created based on the requirements of the software being developed, in contrast to the existing ones, which are usually based on the functionality of the generated code/software (i.e., unit tests).\nThe paper's results show that the EvoMAC approach outperforms other approaches when applying the RSD-Bench to adapted versions of standard benchmark datasets like HumanEval.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is very well written and clearly explained. The two proposed contributions (the EvoMAC approach and RSD-Bench dataset) are consistent with the objective expressed in the introduction.\nAlthough the problem of how to (self-)organize multi-agents is not exactly a new problem, the rise of agentic approaches using LLMs brought new traction to this challenge, and the authors addressed a pain point on these approaches when dealing with complex software development. Indeed, most solutions rely only on function-level and ignore the requirements engineering perspective, which ultimately leads the developers to half-baked solutions. This is not the case with the proposed EvoMAC. 
It takes into account the particularities of user requirements when organizing the agents initially and also considers the evaluation of the generated code against initial requirements.\nTheir inspiration for neural network algorithms is broad. However, it is also a clever idea that sounds original and demonstrates a creative adaptation of backpropagation principles to multi-agent collaboration.\nThe authors also provide sound experimentation on how they implement their approach and when comparing with similar solutions.\nTogether with the approach, the authors provided a well-defined benchmark to overcome the limitations of the existing ones. A more detailed description of the RSD-bench could indeed compose a contribution per se.\nThe RSD bench is tailored to requirements engineering, which could address common limitations in agent-based collaboration research by bridging functionality-based assessments with requirement-based evaluations.\", \"weaknesses\": [\"Most of the paper's weaknesses are minor problems that can improve its quality, even though I don't consider them mandatory.\", \"I truly believe that the mathematical explanation of the problem (mainly between lines 169-184) is unnecessary, even though I see that this somehow facilitates some later explanations. Although it provides some kind of generalization, the approach still relies on LLMs that are probabilistic by nature and then are not mathematically generalizable (i.e., the function $\\phi(\\cdot,\\cdot)$ is essentially a message sent to an LLM, so it does not behave exactly as a mathematical generic function as the authors may want to demonstrate). I advise the removal of the mathematical explanation. It can help sometimes but does not add value to the paper.\", \"The paper lacks an actual example/use case in the opposite (or complementary, if the authors decide to keep it) of the mathematical explanation. The authors use this type of explanation in lines 270-271. 
This kind of example can be used as a running example throughout the paper.\", \"Minor problems:\", \"Typo on the Figure 3 caption. \\\"indrection\\\"\", \"Figures 6 and 8 can be improved to ensure legibility and accessibility, maybe by adjusting font size and contrast.\", \"Figure 1 is a bit confusing. I think it can be split into 3 different figures or be better explained in the paper. If kept as is, I suggest adding brief explanations of the arrows, such as \\\"add,\\\" \\\"revise,\\\" and \\\"remove.\\\"\", \"Figure 5 can be rethought with a better color choice, especially considering the accessibility of the paper.\"], \"questions\": [\"When explaining their approach (section 3.1), the authors did not mention problems regarding the context window of most LLMs. Since requirements can contain quite a large amount of textual data, is the approach capable of dealing with it without extra techniques (e.g., RAG)? If not, even though the authors did not highlight it as a problem, the context window limitations should be mentioned.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer kPYh\", \"comment\": \"We sincerely appreciate your thoughtful comments. Below, we respond to each point in detail. If our responses adequately address your concerns, we would be truly grateful if you would consider raising your score. We also remain open to any further discussions that could contribute to enhancing the quality of our paper.\\n\\n[**Weakness 1**] \\nThe benchmark proposed in this paper lacks data analysis and some basic statistical information, such as prompt length, the number of final generated files/functions, etc.\\n\\n**Answer:** \\nThank you for the suggestions. We provide more detailed data analysis and statistical information in the following parts.\\n1. The data analysis of the benchmark is in the supplementary material: Table 1 and Figure 1. 
Note that the Supplementary Material is submitted as a separate PDF, which can be downloaded by clicking the down-arrow button labeled \"Supplementary Material\" (below the abstract). For convenience, we also provide a copy of Table 1 below.\n - Table 1 presents the number of software samples, test cases, and the average/maximum lengths of requirement prompts for both websites and games. We see that: i) our software-level benchmark is diverse, encompassing two common types of software (Websites and Games) as well as two levels of difficulty (Basic and Advanced); and ii) our software-level benchmark is notably challenging, with an average prompt length exceeding 500 on Game and 1000 on Website, making it almost 5 and 10 times longer than the function-level HumanEval benchmark, respectively.\n\n - Figure 1 illustrates the distribution of test case types across both categories. We see that our software-level benchmark includes 11 distinct test case categories and a total of 616 test cases, representing a scale 6 times larger than the function-level HumanEval benchmark. This extensive set enables a more focused and thorough evaluation of code generation capabilities.\", \"table_1\": \"Basic statistics for website and game domains, including the amount of samples, the task prompt length (Average/Max), and number of test cases at both Basic and Advanced levels.\n\n| Benchmark | Amount | Prompt length(token)| Testcase(Basic) | Testcase(Advanced) |\n|-|-|-|-|-|\n|Website|45| 1011/1553|292|247|\n|Game| 8| 507/788|46 |31|\n|HumanEval|164|131/398|/|/|\n\n2. We provide more detailed statistical information in the table below, including code length, file count, and function count. We see that the generated code consists of over six functions across multiple files, showcasing advanced capabilities in generating complex, realistic code that requires function coordination and synchronization across files. 
This suggests that: i) our software-level benchmark closely reflects the advanced coding skills needed for real-world coding tasks, and ii) the proposed multi-agent collaboration system effectively supports the generation of more complex code.\n\n\n|Statistical Information| Game | Website |\n|-|-|-|\n|Generated function count| 6.25$\\pm$ 1.30| 10.67$\\pm$ 3.39|\n| Generated file count| 1.38$\\pm$ 0.70 | 6.53$\\pm$ 1.61|\n| Generated code length| 158$\\pm$ 17.13 | 276$\\pm$ 227.72|\n\n---\n\n[**Weakness 2**] \nThe benchmark proposed in this paper is relatively easy, with the EvoMAC method already achieving around 90% accuracy.\n\n**Answer:** \nTo address the reviewer's concern, we would like to clarify that the benchmark is challenging from two aspects.\n\n1. Our benchmark includes two difficulty levels: Basic and Advanced. While the Basic level achieves around 90% accuracy, the Advanced level proves challenging, with accuracy around 50%.\n\n2. The Advanced level reflects more complex software functionalities, such as game logic rules and dynamic web content management. Passing each advanced test case is challenging and demands a fundamental improvement in coding capabilities, as meeting a single advanced requirement involves synchronized implementation and function calls across multiple files. For instance, fulfilling the advanced requirement of \"eating a mushroom and earning points in Mario\" requires checking the locations of the mushroom and Mario, adjusting the game score accordingly, and updating the visualization. It requires implementing multiple interdependent functions, demanding advanced coding capabilities to avoid conflicts.\n\n\n---\n\n[**Question 1**] \nAre there any issues with the citation format in the paper?\n\n**Answer:** \nThanks for pointing this out. 
We will fix it.\\n\\n---\\n\\n[**Question 2**] \\nDoes the paper lack an appendix?\\n\\n**Answer:** \\nWe apologize for any confusion regarding the appendix. The appendix has been submitted as a separate PDF and can be accessed by clicking the down-arrow button labeled \\\"Supplementary Material\\\" below the abstract.\\n\\n---\"}", "{\"title\": \"From AC.\", \"comment\": \"Reviewer sExs: if possible, can you reply to the rebuttal?\"}", "{\"title\": \"Response to Reviewer 32VP\", \"comment\": \"We sincerely appreciate your thoughtful comments. Below, we respond to each point in detail. If our responses adequately address your concerns, we would be truly grateful if you would consider raising your score. We also remain open to any further discussions that could contribute to enhancing the quality of our paper.\\n\\n[**Weakness 1**] \\nThe EvoMAC framework introduces significant complexity with its multi-agent setup, dynamic adjustments, and textual backpropagation mechanism. This complexity may limit the framework's accessibility and implementation ease for real-world adoption outside of specialized research contexts.\\n\\n**Answer:** \\nThanks for the time in reviewing. To address the reviewer's concerns about EvoMAC's applicability in real-world scenarios, we offer clarification from the following four perspectives.\\n1. In real-world scenarios, software development may require hundreds of engineers working collaboratively over months or even years to achieve completion. The primary focus within the software development community is on improving the effectiveness of the development process, advancing from function-level requirements to more extensive software-level requirements. \\n2. Multi-agent collaboration is a common practice in automatic software development [1,2], often involving tens or even hundreds of agents to enhance performance. EvoMAC's demonstrated effectiveness makes it well-suited to handling the complexities of real-world software development. 
\\n3. EvoMAC's approach, which includes a multi-agent setup, dynamic adjustments, and a textual backpropagation mechanism, is fully automated. It eliminates the need for human intervention or specific heuristic designs, ensuring it is highly adaptable to real-world scenarios. \\n4. EvoMAC performs effectively even when powered by smaller 14B models, as shown in the table in **Weakness 2 and Question 2**. This makes it feasible for deployment in resource-constrained real-world environments, offering an efficient solution without compromising performance.\\n\\n[1] Zhuge M, Wang W, Kirsch L, et al. GPTSwarm: Language Agents as Optimizable Graphs[C]//Forty-first International Conference on Machine Learning.\\n[2] Qian C, Xie Z, Wang Y, et al. Scaling Large-Language-Model-based Multi-Agent Collaboration[J]. arXiv preprint arXiv:2406.07155, 2024.\\n\\n---\\n\\n[**Weakness 2 and Question 2**] \\nAlthough EvoMAC performs well with large models like GPT-4o-Mini, its performance with smaller or less capable models is unclear. This reliance may restrict its applicability, particularly in environments with limited computational resources. Can EvoMAC work effectively with models of different sizes, or does it rely on the power of high-capacity LLMs? Would it perform satisfactorily with smaller models that might be more efficient in constrained environments?\\n\\n**Answer:** \\nThank you for your time and valuable suggestions. To address the reviewer's concerns regarding its applicability in constrained scenarios, we equip EvoMAC with smaller models, including Qwen2.5-Coder-7B-Instruct and Qwen2.5-Coder-14B-Instruct, and the performance results are shown in the table below. \\n\\n1. EvoMAC performs effectively with models of varying sizes, ranging from smaller models like 7B and 14B to GPT-4o-mini. Regardless of the model used, EvoMAC consistently outperforms single-agent systems, highlighting its generalizability and effectiveness. \\n\\n2. 
EvoMAC's effectiveness does not demand very high-capacity LLMs. For instance, the 14B model achieves performance gains of 28.36% and 29.94% on Website Basic and Advanced tasks, respectively, comparable to GPT-4o-mini's gains of 26.48% and 20.26%. However, the LLM capacity must not be too limited; the 7B model, whose single-agent performance is 24.32% and 7.57%, shows more modest improvements. \n\n| Method | Model | Web-Basic | Web-Advanced |\n| ------ | -------------------------- | -------------- | -------------- |\n| Single | Qwen2.5-Coder-7B-Instruct | 24.32 | 7.57 |\n| Single | Qwen2.5-Coder-14B-Instruct | 43.25 | 20.65 |\n| Single | GPT-4o-Mini | 62.90 | 44.40 |\n| EvoMAC | Qwen2.5-Coder-7B-Instruct | 27.74 (+3.42) | 7.69 (+0.12) |\n| EvoMAC | Qwen2.5-Coder-14B-Instruct | 71.58 (+28.26) | 50.61 (+29.94) |\n| EvoMAC | GPT-4o-Mini | 89.38 (+26.48) | 65.05 (+20.65) |\n\n---\"}", "{\"title\": \"From AC.\", \"comment\": \"Reviewer 32VP: if possible, can you reply to the rebuttal?\"}", "{\"title\": \"From AC.\", \"comment\": \"Reviewer kPYh: if possible, can you reply to the rebuttal?\"}", "{\"title\": \"Response to Reviewer 32VP\", \"comment\": \"[**Question 4**]\nGiven EvoMAC\u2019s iterative approach, how would it handle larger software projects with thousands of lines of code and extensive requirements? Are there specific design considerations for scaling it to more extensive projects?\n\n**Answer:** \nEvoMAC\u2019s multi-agent setup enables automatic scaling for large-scale project development. Experiments in the paper show that EvoMAC effectively expands from function-level code with tens of lines to software-level code with hundreds of lines, with promising potential to further scale to thousands of lines. This scalability is achieved through two key design features:\n\n1. 
The coding organizer can automatically break down extensive requirements into smaller, manageable sub-tasks and organize a larger team of coding agents to collaboratively develop large-scale software projects. Similarly, it can form a larger testing team to conduct comprehensive testing for these projects.\n\n2. The updating team in EvoMAC continuously adds new agents to handle unfinished requirements, ensuring the completeness of large-scale code generation. Additionally, it removes agents whose tasks are complete, improving development efficiency by avoiding redundant efforts.\"}", "{\"title\": \"Response to Reviewer aNRL\", \"comment\": \"[**Weakness 3**]\nGiven that EvoMAC includes multiple evolutionary iterations, direct comparisons with standard multi-agent frameworks may not be entirely fair. Could you also provide the number of LLM calls for tasks in RSD-Bench? This metric would offer a clearer understanding of EvoMAC\u2019s performance.\n\n**Answer:** \nWe compare the average API calls between EvoMAC, ChatDev, and ChatDev* in the table below. ChatDev is the best multi-agent baseline on RSD-Bench in Table 1. ChatDev* is the variant of ChatDev with a larger API call budget. 
We see that 1) simply increasing the number of API calls cannot bring a performance improvement; 2) although our EvoMAC requires relatively more API calls, it leads to significant improvement; 3) for simpler tasks like HumanEval, EvoMAC can organize the team more efficiently and thus 'gets twice the result with half the effort'.\n| Method | Web-Basic | Web-Advanced | Web (APICall) | Game-Basic | Game-Advanced | Game (APICall) | HumanEval | HumanEval (APICall) |\n| -------- | --------- | ------------ | ------------- | ---------- | ------------- | -------------- | --------- | ------------------- |\n| ChatDev | 62.67 | 43.45 | 7.09 | 53.63 | 32.26 | 14.00 | 70.73 | 12.55 |\n| ChatDev* | 55.13 | 31.17 | 61.11 | 45.65 | 35.48 | 36.63 | / | / |\n| EvoMAC | 89.38 | 65.05 | 47.87 | 77.54 | 51.60 | 57.13 | 94.51 | 8.01 |\n\n---\n\n\n[**Weakness 4**]\nEvoMAC primarily focuses on models like GPT-4, Claude 3.5, and Gemini, but it is unclear if the framework can adapt to less powerful models, such as GPT-3.5 or open-source options like DeepSeek. Presenting results across a broader range of LLMs would support EvoMAC\u2019s claims of robustness and adaptability.\n\n**Answer:** \nThanks for the suggestions. We equipped EvoMAC with smaller models, including Qwen2.5-Coder-7B-Instruct and Qwen2.5-Coder-14B-Instruct, and the performance results are shown in the table below. \n\n1. EvoMAC performs effectively with models of varying sizes, ranging from smaller models like 7B and 14B to GPT-4o-mini. Regardless of the model used, EvoMAC consistently outperforms single-agent systems, highlighting its robustness and adaptability. \n\n2. EvoMAC's effectiveness does not demand very high-capacity LLMs. For instance, the 14B model achieves performance gains of 28.36% and 29.94% on Website Basic and Advanced tasks, respectively, comparable to GPT-4o-mini's gains of 26.48% and 20.26%. 
\\n\\n\\n| Method | Model | Web-Basic | Web-Advanced |\\n| ------ | -------------------------- | -------------- | -------------- |\\n| Single | Qwen2.5-Coder-7B-Instruct | 24.32 | 7.57 |\\n| Single | Qwen2.5-Coder-14B-Instruct | 43.25 | 20.65 |\\n| Single | GPT-4o-Mini | 62.90 | 44.40 |\\n| EvoMAC | Qwen2.5-Coder-7B-Instruct | 27.74 (+3.42) | 7.69 (+0.12) |\\n| EvoMAC | Qwen2.5-Coder-14B-Instruct | 71.58 (+28.26) | 50.61 (+29.94) |\\n| EvoMAC | GPT-4o-Mini | 89.38 (+26.48) | 65.05 (+20.65) |\\n\\n---\\n\\n[**Weakness 5**] Can the authors provide additional examples of the unit tests designed within RSD-Bench?\\n\\n**Answer:** \\nYes! We show the test cases used for evaluation and EvoMAC's generated test cases at Tab 11 and 14 in the appendix, including task prompt(Requirement), subtask decomposed by Test Organizer, evaluation metric test case, and generated test case.\"}" ] }
4QWPCTLq20
IntelLLM: Little Hints Make a Big Difference for LLM KV Cache Compression
[ "TingLong Li", "Qiuyu Shao" ]
Large Language Models (LLMs) have demonstrated exceptional capabilities in integrating contextual knowledge, but their deployment is often constrained by the substantial computational resources required for long text sequences. To mitigate the inference time cost associated with attention mechanisms, LLMs utilize key-value embedding caching techniques (KV cache), which introduce significant storage pressure. In this paper, we propose IntelLLM, a novel and efficient approach to KV cache compression that strikes a balance between compression rate and performance. Drawing inspiration from sparse attention mechanism, we observe that only a small subset of tokens in lengthy texts capture the majority of attention weights. This sparsity, intrinsic to the attention mechanism, serves as the foundation for improving the KV compression ratio through a strategic eviction method. IntelLLM is composed of center of gravity eviction (CGE) strategy and remote gap localization (RGL) strategy. CGE is designed to address the potential loss of important semantic dependencies when evicting high-sparsity tokens, which prioritizes the retention of key tokens by shielding the center of gravity of attention during inference, thereby preserving critical information and optimizing the efficiency of attention computation. Additionally, RGL is proposed to leverage implicit positional features to maintain long-range dependencies, inspired by advancements in location encoding research. Our KV compression approach integrates seamlessly with existing LLMs, requiring minimal code modifications without the need for fine-tuning or model parameter changes. IntelLLM not only significantly reduces the storage requirements for KV cache but also consistently outperforms full KV models in long text processing tasks, while utilizing only 50% of the typical KV cache expenses.
[ "LLM", "KV cache compression", "CGE", "RGL" ]
https://openreview.net/pdf?id=4QWPCTLq20
https://openreview.net/forum?id=4QWPCTLq20
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fDcZClXjhI", "Vxv6XKHR8E", "ABYExszRWE", "2txA7ihfEh", "2sRFVbi9Sl", "2EmLtpS58W" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730674623943, 1729074040738, 1730562815067, 1730689391070, 1730588333769, 1732608625971 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9602/Reviewer_TdSu" ], [ "ICLR.cc/2025/Conference/Submission9602/Reviewer_4hs9" ], [ "ICLR.cc/2025/Conference/Submission9602/Reviewer_4QgB" ], [ "ICLR.cc/2025/Conference/Submission9602/Reviewer_hxLH" ], [ "ICLR.cc/2025/Conference/Submission9602/Reviewer_BUQq" ], [ "ICLR.cc/2025/Conference/Submission9602/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces IntelLLM, a framework that aims to optimize the key-value (KV) cache compression for large language models (LLMs) without compromising performance. It addresses the challenge of high memory consumption during long-sequence inference by using two innovative techniques: Center of Gravity Eviction (CGE) and Remote Gap Localization (RGL). CGE prioritizes important tokens in attention mechanisms to ensure efficient memory use, while RGL preserves essential long-range dependencies using positional features. These strategies enable significant memory savings, reducing KV cache usage by 50%, with only a minimal impact on inference latency. The authors demonstrate IntelLLM's effectiveness through comprehensive experiments, achieving performance comparable to or better than full KV models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The combination of CGE and RGL provides a novel solution to the KV cache memory challenge, enhancing memory efficiency without fine-tuning or substantial performance loss. 
The paper presents a strong theoretical basis for its methods, including insights into the sparsity of attention weights and the impact of key tokens, strengthening the validity of the proposed strategies. IntelLLM is easy to integrate into existing LLM frameworks, as it requires minimal modifications, making it highly practical for real-world deployment, especially in resource-constrained environments. The extensive benchmarking on LongBench with models like Llama-3-8B-Instruct and Mistral-7B demonstrates IntelLLM's efficiency and adaptability across diverse tasks, validating the approach. Achieving 50% KV cache reduction with a negligible increase in latency is a noteworthy achievement, making IntelLLM suitable for long-text inference tasks in various settings.\", \"weaknesses\": \"1. Sparse attention is already well explored in several previous works as [1] [2]. This will weaken the novelty of this work. H2O [3] has already well-explored the feedback of using sliding window.\\n2. Lack of baselines (i.e., H2O [3], SnapKV [4], PyramidKV [5])\\n3. Evaluation of Needle in a Haystack is required to help illustrate your motivation of maintaining long-range dependencies\\n4. GCE is pretty close to previous methods like H2O [3] and SnapKV [4]. I can only see limited novelty over this method.\\n\\n[1] https://arxiv.org/abs/2402.17762\\n[2] https://arxiv.org/pdf/2309.17453\\n[3] https://arxiv.org/abs/2306.14048\\n[4] https://arxiv.org/abs/2404.14469\\n[5] https://arxiv.org/abs/2406.02069\", \"questions\": \"As weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents IntelLLM, a novel KV cache eviction algorithm designed to alleviate the storage and computational burden of Transformer-based large language model inference. 
IntelLLM leverages the sparsity of attention mechanisms and strategically evicts certain tokens to improve inference efficiency. The authors propose two key strategies: Center of Gravity Eviction (CGE) and Remote Gap Localization (RGL). CGE addresses the semantic loss caused by dominant attention scores in softmax. RGL reorganizes token positions by creating a large gap between global and local tokens, further enhancing processing efficiency. IntelLLM is evaluated on LongBench with two models, Llama-3-8B and Mistral-7B-Instruct-v0.2, and outperforms all the baselines, including Full KV cache, StreamingLLM, and LM-Infinite. The algorithm achieves a 2x KV cache compression ratio for Llama and an 8x compression ratio for Mistral, maintaining strong inference efficiency despite the reduced cache size. Additionally, the paper includes ablation studies focusing on Head Gravity and the RGL Gap, demonstrating the soundness and effectiveness of IntelLLM's design.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper provides a clear explanation of why the attention mechanism tends to allocate excessive weights to a few tokens with high attention scores while neglecting others. The analysis of the softmax function is insightful and well-reasoned.\\n\\n2. The proposed method, IntelLLM, demonstrates strong performance in benchmarks and shows compatibility across multiple models, including widely-used ones like Llama and Mistral.\\n\\n3. The method is simple and easy to implement. IntelLLM can be integrated into various scenarios and inference frameworks.\", \"weaknesses\": \"1. The presentation is to some extent unclear and confusing. Some notations and terminologies are used without a clear definition. The description of Algorithm 1 is confusing and hard to understand. Tables and figures can be organized in a better format. Details of presentation issues are described in question Q1.\\n\\n2. Insufficient baseline selection. 
The algorithm proposed in this paper is an eviction algorithm that compresses the KV cache; thus, there should be comparisons with other popular eviction methods, including H2O [1], SnapKV [2], SirLLM [3] and IntervalLLM (a baseline created in SirLLM). It would be better to compare IntelLLM with other end-to-end methods, including InfLLM [4] and MInference [5], as they utilize attention sparsity. (This is only a suggestion and the authors are not required to test all the methods above during the review process, but further discussion is expected.)\\n\\n3. Insufficient benchmark datasets. Although LongBench is a classic benchmark for long context inference, the average length is relatively short and the hardness is limited. It is expected to benchmark IntelLLM on harder benchmarks, e.g. L-Eval [6], and longer benchmarks, e.g. RULER [7] or InfiniteBench [8]. It would also be better to test IntelLLM on accurate context retrieval tasks, e.g. Ret.PassKey in InfiniteBench and RULER. (This is also a suggestion, and the authors are not required to test all of the benchmarks mentioned above. However, some simple supplementary benchmarks are welcomed.)\\n\\n4. Lack of ablation studies. Ablations on $L_{comp}$ and $L_{near}$ are expected, as they can be used to prove which part of the retained KV cache is more important and to what extent the hyperparameters can affect the overall performance. Also, there should be comparisons between RGL and other methods of assigning position information, such as assigning continuous position ids and not assigning any position information for distant tokens.\\n\\n5. Unclear presentation of research intention. KV cache compression is a technique developed to enhance model generation speed or reduce memory consumption. The paper should clearly state which purpose is the main focus, and conduct the corresponding experiments. For example, the model generation speed should be tested, and it is expected to be faster than full KV cache inference. 
The peak memory is also expected to be lower if the system implementation is careful enough. The speed reported in line 445-446 does not support the stated research intention, as the pre-fill speed is slightly lower. Reporting a faster generation speed (the speed tested only on the decoding stage, excluding the pre-fill stage) might be helpful.\\n\\n6. No discussion of limitations and future work. This paper should discuss important limitations and potential future work of the proposed methods.\\n\\n\\n[1] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models\\n\\n[2] SnapKV: LLM knows what you are looking for before generation\\n\\n[3] SirLLM: Streaming Infinite Retentive LLM\\n\\n[4] InfLLM: Unveiling the intrinsic capacity of LLMs for understanding extremely long sequences with training-free memory\\n\\n[5] Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention\\n\\n[6] L-eval: Instituting standardized evaluation for long context language models\\n\\n[7] RULER: What\\u2019s the Real Context Size of Your Long-Context Language Models?\\n\\n[8] \\u221e Bench: Extending Long Context Evaluation Beyond 100K Tokens\", \"questions\": \"1. Suggestion on improving the overall presentation. First, some notations are used without definition, e.g. $n, m, l_{head}, l_{tail}$ in Algorithm 1, and the operation $[:]$ (which dimension is it applied to?) used in Algorithm 1. Second, terms such as \\\"Center of Gravity\\\" should be defined formally in the KV cache eviction topic. Third, there remain some minor grammar and expression mistakes in this paper, such as the misuse of citation format. Fourth, the presentation of the tables and figures should be improved. The average score should be reported in Table 1 & 2. The meaning of the x-axis and y-axis of Figure 1 & 2 should be clarified.\\n\\n2. The description of Algorithm 1 is confusing. 
Could you provide a very detailed explanation of the calculation process in natural language? \\n\\n3. There isn't any model named \\\"Llama3-7B\\\". Do you mean Llama-3-8B?\\n\\n4. What are the detailed settings of the baselines, e.g. how much KV cache is retained? Could you provide more information on the hyperparameters used for the baselines?\\n\\n5. The experimental results mentioned in line 316-319 should be included in this paper if such a conclusion is drawn. If there is no space, it should be placed in the Appendix. The experiments in line 359-361 should also be included.\\n\\nOverall, I really like the idea proposed by this paper, which I personally find very inspiring. I will raise my soundness rating and overall rating if a good discussion, supported by the necessary experimental results, takes place during the rebuttal phase.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents IntelLLM, a method for compressing the key-value (KV) cache in Large Language Models (LLMs) to address memory constraints in long-sequence processing. Drawing on the sparsity of attention mechanisms, IntelLLM focuses on retaining only essential tokens, significantly reducing the KV cache size without compromising model performance. The proposed approach combines two strategies: center of gravity eviction (CGE), which prioritizes important tokens to preserve key semantic information, and remote gap localization (RGL), which maintains long-range dependencies using positional features. IntelLLM can integrate smoothly with existing LLMs, requiring minimal modifications and no fine-tuning. 
Experimental evaluations show IntelLLM achieves performance close to StreamingLLM while halving KV cache requirements.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The problem investigated in the paper, KV cache compression, is crucial for long-context generation.\", \"weaknesses\": \"a.\\tThe paper lacks substantial references from the past year, missing important studies like SnapKV[1], FastGen[2], H2O[3], PyramidKV[4], etc.\\nb.\\tThe observation that only a limited subset of tokens is critical for long-context generation has been extensively discussed in these and other recent works, which should be cited to provide a more comprehensive background.\\nc.\\tThe experimental baseline used in the study is relatively weak; including stronger baselines from the above-mentioned works would enhance the robustness of the comparative analysis and strengthen the validity of the results.\\nd.\\tIn Section 3, the two presented \\\"theorems\\\" are more accurately findings, as no formal proofs are provided to substantiate these claims.\\n\\n[1] SnapKV: LLM Knows What You are Looking for Before Generation\\n[2] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs\\n[3] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models\\n[4] PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling\", \"questions\": \"There is no reference to Table 2; why are the experiments different on Mistral and Llama?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper develops a KV-cache compression technique called IntelLLM to optimize inference in LLMs on long-text tasks. IntelLLM consists of two cache eviction strategies: center of gravity eviction (CGE) and remote gap localization (RGL). 
CGE mitigates the domain semantic imbalance by redirecting attention away from the center of gravity attention (cluster of important KVs). RGL solves the issue of time span vanishing caused by cache compression by assigning cache position values to distant KVs. Empirical evaluation shows IntelLLM reduces KV cache memory by 50% with similar performance on long-text tasks compared to the baseline.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses an important research problem, KV cache optimization for LLM inference, and proposes two interesting techniques: center of gravity eviction (CGE) and remote gap localization (RGL).\", \"Empirical results are competitive with prior work and baseline models.\"], \"weaknesses\": \"- Using sparsity in attention to compress the KV cache is not new. Two ICLR 2024 papers: StreamingLLM (https://openreview.net/forum?id=NG7sS51zVF) and FastGen (https://openreview.net/forum?id=uNrFpDPMyo) both observe the attention patterns and use them to compress the KV cache.\\n\\n- Missing important work in both related work and baseline comparison. The paper does compare with StreamingLLM but does not discuss it in related work. 
In fact, the paper misses many important prior works on KV cache:\\n(1) Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs, ICLR 2024, https://openreview.net/forum?id=uNrFpDPMyo\\n(2) SnapKV: LLM Knows What You are Looking for Before Generation, https://arxiv.org/abs/2404.14469\\n(3) XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference, https://arxiv.org/abs/2404.15420\\n(4) Layer-Condensed KV Cache for Efficient Inference of Large Language Models, https://arxiv.org/abs/2405.10637 \\n(5) PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference, https://arxiv.org/abs/2405.12532\\n(6) PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling, https://arxiv.org/abs/2406.02069 \\n\\n\\n- Many writing sections are unclear. \\n(1) The introduction (line 55) says prior work has two limitations; does IntelLLM have these two limitations too? What is the conflict between increased computational overhead and memory optimization? Line 83 says significant; how much memory is saved? Any speed gains? \\n(2) The analysis in Section 3.1 is weak without any evidence or citations; the two conclusions are not convincing either. For example, line 172 says they compromise the robustness of the attention distribution. What is the robustness of attention in the first place? And why would the covariates compromise this? Line 180 says the sliding window fails to reason effectively about long texts, any evidence or citation? Line 183 says they contribute to the collapse of the LLM; again, no evidence or justification. \\n(3) The two theorems in sections 3.2 and 3.3 are not theorems, and the paper provides no proof.\\n\\n- Evaluation is weak and flawed. Table 1 (line 399) presents the results of IntelLLM on LongBench. But it is unclear why the window size is 4K for IntelLLM and 8K for others. There is no side-by-side efficiency comparison either. 
Line 443 says the latency increased by 2.63% but the memory saving is 50%; is it the case that IntelLLM always saves 50% memory? There is no ablation study on that.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to further the Pareto frontier of KV-cache compression rate and performance. By employing strategic eviction strategies, the method leverages the observation that only a small subset of tokens in long texts captures the majority of attention weights.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"**Significance:** The paper claims 50% KV cache compression without a significant drop in performance, outperforming other KV cache compression methods in a majority of datasets in LongBench and outperforming full KV cache in some datasets. The method does not require fine-tuning, making it easy to apply to existing LLMs.\", \"weaknesses\": \"1. There does not seem to be a rigorous proof or set of empirical observations to substantiate the theorems proposed in sections 3.2 and 3.3.\\n2. There is no discussion on the method used to choose \\\"k\\\" \\u2013 the number of top keys to be treated as \\\"centers of gravity\\\".\\n3. Implementation details are not provided, making it hard to reproduce the results.\\n4. There does not seem to be an ablation study to evaluate how the method performs when only either CGE or RGL is used.\\n5. There is no discussion on specific deployment environments where a 50% memory saving will enable new use cases, or how this method can be combined with other methods to further increase memory savings.\\n6. The experiments only provide two KV cache compression methods as baselines, leaving out other KV cache compression methods that do not require fine-tuning, such as static prefix caching, paged attention, or radix attention. 
Additionally, the experiments do not compare with other approaches that do not involve KV cache compression.\\n7. Best-performing methods are not clearly marked in the experimental result tables.\\n8. The explanations and visualizations of CGE and RGL are unclear.\", \"questions\": \"1. What is the rationale behind choosing StreamingLLM and InfiniteLLM as KV cache compression baselines?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4QVgnxXVDB
3CIL: Causality-Inspired Contrastive Conditional Imitation Learning for Autonomous Driving
[ "Huanghui Zhang", "Zhi Zheng", "Xiaomin Lin" ]
Imitation learning (IL) aims to recover an expert's strategy by performing supervised learning on the demonstration datasets. Incorporating IL in safety-critical tasks like autonomous driving is promising as it requires less interaction with the actual environment than reinforcement learning approaches. However, the robustness of IL methods is often questioned, as phenomena like causal confusion occur frequently and hinder its practical use. In this paper, we conduct causal reasoning to investigate the crucial requirements for the ideal imitation generalization performance. With insights derived from modeled causalities, we propose causality-inspired contrastive conditional imitation learning (3CIL), a conditional imitation learning method equipped with contrastive learning and action residual prediction tasks, regularizing the imitator in causal and anti-causal directions. To mitigate the divergence with experts in unfamiliar scenarios, 3CIL introduces a sample-weighting term that transforms the prediction error into an emphasis on critical samples. Extensive experiments in the CARLA simulator show the proposed method significantly improves the driving capabilities of models.
[ "Imitation Learning", "Autonomous Driving", "Causal Reasoning", "Causal Confusion" ]
Reject
https://openreview.net/pdf?id=4QVgnxXVDB
https://openreview.net/forum?id=4QVgnxXVDB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ztG1VXqyto", "xlnooxRXh6", "x1nrE4fCV9", "wcNaUrH7pH", "wFsQvkfZzD", "tXkuRafLAB", "tK59vbdp5P", "rGscHnAuRJ", "bYVRf4g70D", "YOtP7mrGeW", "SpzGzT1AOi", "Rb6X0CgLAx", "RJexPeQZ54", "Qq6Jr5iZ9H", "Q7k2ucMyax", "OgV4nso1hj", "HOwQVYN3ty", "CxEgzOrQHz", "A5bitKkyzf", "0g48bCRdpg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733060806860, 1732504738756, 1732504011184, 1730943675086, 1732698340233, 1731342232717, 1733156463367, 1737523964281, 1732504205934, 1732503936973, 1732504360571, 1730564383742, 1730411507971, 1732504914166, 1734769184287, 1733110079043, 1732504421911, 1732684776717, 1732504997487, 1732698426633 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_USMw" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_31eB" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_MYqZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_MYqZ" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_9A5c" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Area_Chair_u8Aj" ], [ "ICLR.cc/2025/Conference/Submission9150/Reviewer_USMw" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9150/Reviewer_9A5c" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ], [ "ICLR.cc/2025/Conference/Submission9150/Authors" ] ], "structured_content_str": [ "{\"title\": \"Sorry to bother you\", \"comment\": \"Dear reviewer 31eB,\\n\\nWe really appreciate the time and effort you have put into the review phase. As the discussion period is coming to a close, we wanted to check back to see whether you have any remaining questions or concerns. We would be happy to clarify further, and would be grateful for any other feedback you may provide. \\n\\nThank you very much; we look forward to your replies! Happy Thanksgiving Day\\uff01\\n\\nBest regards, \\nPaper authors\"}", "{\"title\": \"Response to Reviewer 9A5c 1/3\", \"comment\": \"We thank the reviewer for the constructive comments and positive feedback on our paper. Regarding the concerns of Reviewer 9A5c, we provide the following response.\\n>**Q1.1 What the \\\"C\\\" in CIL stands for.**\\n\\nSorry for this confusion. \\\"CIL\\\" stands for \\\"Conditional Imitation Learning\\\", and the term \\\"3CIL\\\" represents **C**ausality-Inspired **C**ontrastive **C**onditional **I**mitation **L**earning. We have corrected this abbreviation confusion in the revised manuscript.\\n\\n>**Q1.2. Does the input for the VAE include all the past history observations $o\\_{1:t}$ ?**\\n\\nSorry for this confusion. The input for the VAE contains only the history observations within the time window $l$, i.e., $o\\_{t-l:t}$, where $l$ is set to $5$ in our experiment. We have added Table 2 to introduce the structures of data and components in our approach, in Appendix A.2.2.\\n\\n>**Q1.3. Some important details are in the appendix.**\\n\\nThanks for the valuable feedback. Due to the page limit of the conference, we had to move some important parts of our paper to the appendix. 
We have added reference links in the main context to help locate corresponding contents, and we will keep working on improving the presentation of our paper.\\n\\n>**Q2.1. Remedies for unobserved confounders (UCs).**\\n\\nWe agree with the reviewer that UCs play vital roles in causal inference and imitation learning, as they hinder the identification of causal effects and introduce spurious correlations into imitators' decision processes.\\nHowever, different from previous approaches that provide feasible procedures for adversarial imitation learning (AIL)[1], inverse reinforcement learning (IRL) [2], and graphical criteria [3,4] for deriving imitators that match or outperform the expert's performance, our work mainly focuses on assisting the complex vision imitation learning task under the setting of behavior cloning (BC).\\nTherefore, our work is more similar to the settings of [5,6], where the high-dimensional observation prevents the imitator from utilizing specific variables to achieve provable de-confounding. However, the imitator may benefit from the incorporation of additional supervision signals or features that exhibit correlations to UCs and observations, so that the backdoor paths can be blocked by controlling them. For example, one could add neighbors' actions as instrumental variables to control the confounding, as they affect $a\\_t$ through $s\\_t$, and have a strong correlation with $s\\_t$.\\n\\n>**Q2.2. Active backdoor path $a\\_{t-1}\\\\gets s\\_{t-1}\\\\to s\\_t\\\\to a\\_t$ can never be blocked.**\\n\\nWe appreciate the reviewer\\u2019s insightful questions. As pointed out, the unobservability of the ground truth states $s\\_{1:t}$ introduces inevitable confounding factors in the imitator's decision-making process. 
Without the introduction of additional features, more opportunities to interact with the environment during the training phase, or further domain knowledge, the active backdoor path $a\\_{t-1}\\\\gets s\\_{t-1}\\\\to s\\_t\\\\to a\\_t$ cannot be blocked in this setting. As a result, providing guarantees for identifiability or robustness is not feasible within the scope of our current framework.\\nRegarding convergence, while we do not offer formal guarantees under these circumstances, we have incorporated supervised contrastive learning and a sample-weighting strategy to assist the learning process and mitigate potential errors arising from the representation learning stage. These approaches help ensure more robust performance in practice, although formal convergence analysis remains an open direction for future work.\"}", "{\"title\": \"Response to Reviewer 31eB 2/2\", \"comment\": \">**W2. Figure 2 could have more annotations: It would be better if the authors could annotate the different colors and shapes of each node.**\\n\\nThank you for your valuable suggestion. In the updated manuscript, we have added annotations to Figure 2.\\n\\n>**Questions about untested assumptions, more ablation experiments, more qualitative examples, details in comparing baselines, and performance gain.**\\n\\nWe really appreciate the reviewer's constructive comments. We have revised our paper based on your advice. We list the revisions as follows. \\n(1) We conduct experiments to test the assumption about the benefit of observation history and present them in Appendix A.4.1. \\n(2) We present more ablation experiments to study the effectiveness of major components in our design framework and present them in Appendix A.4.2 and A.4.3. \\n(3) We add Figure 7 in Appendix A.4.2 to provide an example of the process of our proposed framework. 
\n(4) We introduce the baselines we used for comparison and describe their corresponding implementation process, and present them in Appendix A.3.1. \nThank you again for your valuable feedback.\n\n[1]de Haan, Pim, Dinesh Jayaraman, and Sergey Levine. \"Causal confusion in imitation learning.\" *Proceedings of the 33rd International Conference on Neural Information Processing Systems.* 2019. \n[2]Chuang, Chia-Chi, et al. \"Resolving copycat problems in visual imitation learning via residual action prediction.\" *European Conference on Computer Vision.* 2022. \n[3]Seo, Seokin, et al. \"Regularized behavior cloning for blocking the leakage of past action information.\" *Proceedings of the 37th International Conference on Neural Information Processing Systems.* 2023. \n[4]Nastl, Vivian Yvonne, and Moritz Hardt. \"Do causal predictors generalize better to new domains?.\" *Proceedings of the 38th International Conference on Neural Information Processing Systems.* 2024. \n[5]Chen, Yang, Yitao Liang, and Zhouchen Lin. \"DIGIC: Domain Generalizable Imitation Learning by Causal Discovery.\" *arXiv preprint arXiv:2402.18910.* 2024.\"}", "{\"summary\": \"This paper proposes to solve the causal confusion problem in imitation learning using supervised contrastive learning, residual prediction and sample weighting. 
It draws insights from causality that motivate learning a representation of history observations without spurious correlations.\nExperiments on CARLA show solid improvements over baselines, such as CIL and Premier-TACO.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Formulates the imitation learning problem from a causal perspective and tries to prevent confounding factors using representation learning.\", \"Proposes to use supervised contrastive learning to learn an image representation that aligns with expert actions.\", \"The improvements in the tested scenarios are promising.\"], \"weaknesses\": [\"Lack of comparisons with some related work, such as Wen et al., Key-frame focused visual imitation learning, which proposes a weighting strategy based on action predictability.\", \"It would be nice to have more quantitative and qualitative analysis of the improvement. Can we attribute them to improvements in reducing spurious correlations?\", \"Lack of evaluation on the CARLA benchmark instead of self-constructed scenarios.\"], \"questions\": \"The improvements in those scenarios look promising. But how to demonstrate that it comes from reducing the confounding factors, as shown in previous sections?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for their time and efforts in the review phase and the constructive comments.\n**We have revised our paper (highlighted in blue text color). The modifications are summarized as follows.** \n1. (For Reviewer 31eB, USMw, MYqZ). 
We present more quantitative analysis and experimental results, including the performance of models under severe causal confusion (Appendix A.4.1), the effect of replacing the proposed sample-weighting strategy (Appendix A.4.2), and ablation studies on the effect of major components in our method (Appendix A.4.3). \n2. (For Reviewer USMw, MYqZ). We add the comparison with Keyframe[1], including methods' performance (Table 1 in Section 4.2), and an investigation of the effect of different weighting functions (Table 6 in Appendix A.4.2). \n3. (For Reviewer 31eB, MYqZ, 9A5c). We improve the presentation of experiments, including introducing the structure of data and major components of our method (Table 2 in Appendix A.2.2), the baselines used in our experiments and the implementation details (Appendix A.3.1), our experimental platforms (Appendix A.3.2), and the definition of the reward function (Appendix A.3.5). \n4. (For Reviewer 31eB, USMw, MYqZ, 9A5c). We provide more visualizations, including Figure 6 in Appendix A.4.1 to illustrate circumstances with severe spurious correlations, and Figure 7 in Appendix A.4.2 to illustrate the process of sample-weighting in our method.\n\nWe appreciate the efforts of reviewers in the review and rebuttal phase, and their valuable questions and comments, which have materially improved our paper presentation.\n\n[1]Wen, Chuan, et al. \"Keyframe-focused visual imitation learning.\" *arXiv preprint arXiv:2106.06452.* 2021.\"}", "{\"summary\": \"The authors present a new imitation learning algorithm that aims to solve some causal confusion problems in previous imitation learning methods on self-driving tasks. 
Specifically, the authors propose to 1) learn a more representative state representation; 2) reduce the chance of learning spurious correlations by inferring delta actions from latent states, and 3) weight training samples by the discrepancy between prediction and ground truth.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors do a good job summarizing previous findings about causal confusion problems in self-driving tasks\", \"Proposed lots of interesting strategies to potentially solve or alleviate the causal confusion problems\"], \"weaknesses\": [\"Authors made lots of assumptions:\", \"It's not sufficient to directly map $o\\_t$ to $a\\_t$\", \"This remains an untested hypothesis\", \"learning a decoder for $\\\\hat{s}\\_t$ helps it match with the expert's $s\\_t$\", \"on the contrary, learning a decoder for $\\\\hat{s}\\_t$ could force the encoder to focus on every detail in the image, even the ones that do not directly contribute to the ground-truth $s\\_t$.\", \"Since $\\\\Delta a\\_t$ is inferred from $s\\_t$, it doesn't learn the spurious correlation\", \"This assumption can be wrong since $s\\_t$ would contain information from $a\\_{t-1}$\", \"The proposed method is better than baselines in most scenarios, but is that because of the design choices or just better models or bigger capacities?\", \"Figure 2 could have more annotations\", \"It would be better if the authors could annotate the different colors and shapes of each node\"], \"questions\": \"My main concern is that there are lots of assumptions made in this paper that are unsupported by evidence or experiments; I would change my opinion if the authors present more ablation experiments that carefully study each of their design decisions. I would also love to see more qualitative examples (instead of just descriptions). Lastly, the authors should give more details when comparing the baselines. 
Is the performance gain simply caused by a better network architecture or a bigger network capacity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response and the new results during the rebuttal period. Most of my concerns have been addressed. I will keep my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer USMw\", \"comment\": \"We thank the reviewer for the constructive comments and positive feedback on our paper. Regarding the concerns of Reviewer USMw, we provide the following response.\\n\\n>**W1. Lack of comparisons with some related work, such as Key-frame focused visual imitation learning.** \\n\\nThank you for this valuable suggestion. We have introduced Keyframe[1] and implemented it in our experiments, with discussions about its performance in Section 4 and Appendix A.3.1. \\nWe also perform a study by replacing our sample-weighting strategy with two functions in [1]; a brief result is listed below, and a detailed discussion about weighting strategies is provided in Appendix A.4.2. \\n| | | step[1] | softmax[1] | Ours |\\n|------------|----------------|----------|-------------|------|\\n| Scenario 1 | Collision Rate | 0.59 | 0.57 | **0.54** |\\n| Scenario 5 | Collision Rate | 0.52 | 0.54 | **0.48** |\\n| Scenario 6 | Collision Rate | 0.68 | 0.64 | **0.59** |\\n\\n>**W2. More quantitative and qualitative analysis of the improvement. Can we attribute them to improvements in reducing spurious correlations?**\\n\\nWe appreciate the reviewer's constructive suggestions. We have added more quantitative and qualitative analysis in Appendix A.4. We also conduct an experiment under the counterfactual history setting similar to [2], which introduces more severe spurious correlations by fixing frames. 
\\n| | | Premier-TACO[3] | Ours |\\n|------------|--------------|--------------|-------|\\n| Scenario 1 | Reward| **405.87** | 371.00|\\n| Scenario 5 | Reward| 346.12 | **396.11** |\\n| Scenario 6 | Reward| 203.43 | **311.53** | \\n\\nThe experimental result further suggests that our approach is capable of reducing the effect of spurious correlations, and therefore, maintaining good performance. \\n\\n\\n>**W3. Lack of evaluation on CARLA benchmark instead of self-constructed scenarios.**\\n\\nWe appreciate the reviewer for the constructive comment. Due to the limited time and platforms, we are not able to evaluate methods on the CARLA benchmark currently. \\n\\n>**Question: The improvements in those scenarios look promising. But, how to demonstrate it's coming from reducing the confounding factors as shown in previous sections?**\\n\\nWe thank the reviewer for the valuable feedback. We agree that it may not be concluded that our improvements come from controlling the confounding factors: as both the true state $s\\\\_t$ and the inferred state $\\\\hat{s}\\\\_t$ are still influenced by previous actions $a\\\\_{t-l:t-1}$. In contrast, the goal of our paper is to get rid of the reliance on shortcuts that are introduced by $a\\\\_{t-l:t-1}$ on the imitator's decision $\\\\hat{a}\\\\_t$. To do this, our framework incorporates several designs to mitigate the effect of spurious correlations. \\nIn evaluation, we have modified several environmental parameters such as weather conditions, camera angle, and traffic density to create scenarios that are significantly different from the samples in the training set. Therefore, the imitator is required to learn beyond the spurious correlations so that its strategy can generalize well in the testing phase, which our approach proved to do.\\n\\n[1]Wen, Chuan, et al. \\\"Keyframe-focused visual imitation learning.\\\" *arXiv preprint arXiv:2106.06452.* 2021. \\n[2]Chuang, Chia-Chi, et al. 
\\\"Resolving copycat problems in visual imitation learning via residual action prediction.\\\" *European Conference on Computer Vision.* 2022. \\n[3]Zheng, Ruijie, et al. \\\"Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss.\\\" *Forty-first International Conference on Machine Learning.* 2024.\"}", "{\"title\": \"Response to Reviewer 31eB 1/2\", \"comment\": \"We thank the reviewer for the constructive comments. Regarding the concerns of the Reviewer 31eB, we provide the following response.\\n>**W1.1. It's not sufficient to directly map $o\\\\_t$ to $a_t$: This remains an untested hypothesis.** \\n\\nWe thank the reviewer for pointing this out. Indeed, such a hypothesis may not be valid in certain circumstances. Previous studies[1,2,3] also found that the introduction of observations in the past $o\\\\_{t-l:t-1}$ does not always comes with benefits. \\nHowever, our paper focuses on the imitation learning scheme for the driving task. Due to the mismatch between the imitator's observation $o\\\\_t$ and the true states $s\\\\_t$ utilized by the expert policy, the scenarios are commonly modeled with POMDPs[2,3]. The partial observability of actual state $s\\\\_t$ requires the imitator to infer the true state by utilizing information in the observation history. Therefore, we assume that it is not sufficient to directly map $o\\\\_t$ to $a\\\\_t$. \\nWe also conduct experiments to investigate the effect of history observations on the imitators' performance, by replacing $o\\\\_{t-l:t-1}$ with values of $o\\\\_t$. The performances of imitators have significant deteriorations, suggesting that the observations in the past did provide useful indications for imitators to recover expert policy. 
A detailed discussion about the effect of observation history is provided in Appendix A.4.1 in our updated manuscript.\\n| | | Premier-TACO | Ours |\\n|------------|--------------|--------------|-------|\\n| Scenario 1 | Dropped Reward (%) | 13.64 | 28.82 |\\n| Scenario 5 | Dropped Reward (%) | 33.01 | 26.44 |\\n| Scenario 6 | Dropped Reward (%) | 38.95 | 30.34 | \\n\\n>**W1.2. Learning a decoder for $\\\\hat{s}\\\\_t$ helps it match with expert $s\\\\_t$: On the contrary, learning a decoder for $\\\\hat{s}\\\\_t$ could force the encoder to focus on every detail in the image, even the ones that do not directly contribute to ground $s\\\\_t$.** \\n\\nThank you for pointing this out. That\\u2019s true: the supervision of image reconstruction does force the encoder to focus on every detail in the image sequence, which will inevitably introduce unwanted features that are not directly related to ground $s\\\\_t$. \\nHowever, as we have no access to the ground truth $s\\\\_t$, one cannot distinguish all causal features (features that directly contribute to ground $s\\\\_t$) from non-causal features without introducing further assumptions or prior knowledge. Moreover, the potential mismatch between assumptions and the underlying mechanisms of the target system will introduce risks to the overall performance of predictors that focus on only causal features, as shown by the empirical study[4]. \\nTherefore, in this work, we consider maintaining information observed by the imitator as much as possible in the representation $\\\\hat{s}\\\\_t$, and assigning the imitator the ability to infer the future, which is examined by the quality of the reconstructed future image observation. Moreover, the representation is also governed by the supervised contrastive learning loss $\\\\mathcal{L}\\\\_{\\\\text{RNC}}$, encouraging it to match with expert $s\\\\_t$, as measured by alignment with the corresponding action label $a\\\\_t$. \\n>**W1.3. 
Since $\\\\Delta a\\\\_t$ is inferred from $s\\\\_t$, it doesn't learn the spurious correlation: This assumption can be wrong since $s\\\\_t$ would contain information from $a\\\\_{t-1}$.** \\n\\nWe totally agree with the reviewer that $s\\\\_t$ would carry information about $a\\\\_{t-1}$, and the influence from $a\\\\_{t-1}$ would remain in both $\\\\hat{s}\\\\_t$ and $s\\\\_t$. However, the idea of introducing the action residual prediction $\\\\Delta a\\\\_t$ task is to properly express the past action's influence on the current action, through capturing action variations ($\\\\Delta a\\\\_t = a\\\\_t-a\\\\_{t-1}$). Therefore, we aim not to block all possible influence of $a\\\\_{t-1}$, but to prevent the imitator from relying on shortcuts. \\n\\n>**W1.4. The proposed method is better than baselines in most scenarios, but is that because of the design choices or just better models or bigger capacities?**\\n\\nWe thank the reviewer for raising this thoughtful question. To appropriately state the contribution of our approach, the baseline models that we implemented all use the same network design (i.e., the representation model, decoder, and predictor), the same input specs, and the same training strategy. Therefore, the performance differences between methods mainly lie in their design choices. We also provide detailed introductions to the baselines used in our experiments and the ways we implemented them (in Appendix A.3.1 in our updated manuscript).\"}", "{\"title\": \"Response to Reviewer MYqZ 1/2\", \"comment\": \"We thank the reviewer for the constructive comments and positive feedback on our paper. Regarding the concerns of Reviewer MYqZ, we provide the following response.\\n\\n>**W1. Choice of importance weighting strategies.**\\n\\nThanks for the valuable suggestion. 
Akin to Keyframe[1], the sample-weighting strategy in our approach aims to identify potential changepoints and abnormal scenes in the training dataset.\\nInstead of using the prediction error from a copycat policy as in [1], we utilize the errors of action residual prediction as evidence to assign samples with different importance, because the action residual predictor receives features from the representation model directly and can locate important samples more precisely than the copycat policy, which only takes actions in previous frames as input. \\nWe have added the Keyframe approach as a baseline and compared it in Section 4. We briefly report the average collision rates below. \\n\\n| | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 |\\n|----------|------------|------------|------------|------------|------------|------------|\\n| Keyframe[1] | 0.58 | 0.57 | 1.56 | 0.31 | 0.53 | 0.64 |\\n| Ours | **0.54** | **0.46** | **1.25** | **0.27** | **0.48** | **0.59** | \\n\\nWe also added Appendix A.4.2 to further study the importance weighting process, in which both a visualization and the effect of switching the weighting function are provided.\\n\\n>**W2. Imitating in two separate stages versus end-to-end.**\\n\\nWe thank the reviewer for the valuable question and suggestion. We divide the learning process into two separate stages to address the inherent complexity of the visual imitation learning task. The separation enables each module (the representation model and predictor model) to focus on distinct aspects of the task, and contributes to a more stable and efficient training process by decoupling the challenges of feature extraction and policy optimization. 
\\nIn terms of comparison with end-to-end approaches, we have implemented CIL[2] and Keyframe[1] in the end-to-end setting; we briefly report the average accumulated rewards below, alongside the results of two approaches that conduct the learning process in a two-stage manner. While the separate-stage design generally works better in our experiments, we look forward to further exploring the difference between separate-stage and end-to-end training in our future work.\\n| | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 |\\n|--------------|------------|------------|------------|------------|------------|------------|\\n| CIL[2] | 330.49 | 12.14 | 247.29 | 345.00 | 7.18 | 45.93 |\\n| Keyframe[1] | 353.83 | 309.70 | 125.30 | 529.68 | 278.95 | 215.77 |\\n| Premier-TACO[3] | 469.99 | 431.22 | 204.38 | 561.13 | 516.70 | 331.29 |\\n| Ours | **521.26** | **587.44** | **420.38** | **966.35** | **538.50** | **447.27** |\"}", "{\"summary\": \"To deal with the causal confusion problem in imitation learning, the authors take inspiration from causal learning and propose the 3CIL framework, which integrates contrastive learning, action residual prediction, and importance weighting techniques together. Testing on the autonomous driving benchmark CARLA, 3CIL achieves good performance with a higher success rate and fewer collisions than the baselines.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed framework is well-motivated.\\n\\nThe experimental result looks good.\", \"weaknesses\": \"1.\\tFor importance weighting, would you please intuitively or theoretically explain the motivation for using the errors of action residual prediction as weights, instead of other choices, e.g., AdaBoost or Keyframe? Or would you please compare with them?\\n\\n2.\\tWhy is it necessary to divide the imitating process into two separate stages? 
I suggest comparing with training the representation modules and the policy end-to-end.\\n\\n3.\\tIn Table 1, why is the performance of 3CIL in scenario 3 worst across all scenarios? Even worse than the unseen ones, i.e., 5 and 6.\\n\\n4.\\tMore ablation studies, analysis experiments, and visualizations are necessary. The current experimental results only contain the comparison with baselines and two simple ablation studies. I suggest having more experiments and visualizations to verify your arguments that 3CIL successfully removes the spurious correlation and importance weighting correctly finds the rare scenarios.\\n\\n5.\\tMissing some important references [2,3,4].\\n\\n\\n[1] Keyframe-focused visual imitation learning\\n\\n[2] Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst\\n\\n[3] Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming\\n\\n[4] Shaking the foundations: delusions in sequence models for interaction and control\", \"questions\": \"Please refer to the Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose to combine causal reasoning techniques to assist imitation for autonomous driving. Specifically, the paper presents a novel approach, causality-inspired contrastive conditional imitation learning (3CIL), which integrates contrastive learning and action residual prediction. The framework is based on POMDP, trying to mimic the scenarios when the expert and imitator share different views.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The idea of combining causality, contrastive learning and conditional imitation learning is quite interesting. The performance seems to be good for all scenarios. Fig. 
1 is intuitive.\", \"weaknesses\": \"My major concerns lie in theoretical clarity, experimental design, and practical applicability. In the absence of strong theoretical guarantees, the paper would benefit greatly from robust experimental results. Additionally, it\\u2019s crucial to provide a clear rationale for the integration of causality, contrastive learning, and conditional imitation learning, explaining why this combination is necessary. I have outlined specific questions below.\", \"questions\": \"### **Some high-level questions**\\n1. In \\\"3CIL: Causality-Inspired Contrastive Conditional Imitation Learning,\\\" the abbreviation \\\"CIL\\\" is first defined in line 162. However, it\\u2019s unclear what the \\\"C\\\" stands for. Is it \\\"Causality\\\" or \\\"Conditional\\\"? Could you clarify this term for consistency?\\n\\n2. Does the input for the VAE include all the past history observations $o_{1:t}$?\\n\\n3. Some important details are in the appendix. The authors should consider move them to the major context, e.g., Fig. 4 and Fig. 5. \\n\\n### **Some questions about theoretical guarantee**\\n1. As discussed in (Ruan et al., 2022; Ruan & Di, 2022; Kumor et al., 2021), unobserved confounders (UCs) often complicate causal identifiability, and certain variables must be considered to mitigate the influence of spurious correlations or shortcuts during the learning process. In your approach, which specific variables are crucial to achieving this objective? E.g., which variables are required to block all the backdoor paths.\\n\\n2. Following up on the previous question, if all the past ground-truth states $s_{1:t}$ are unobserved (common in POMDPs), and $a_{t-1} \\\\leftarrow s_{t-1} \\\\rightarrow s_{t} \\\\rightarrow a_{t}$ is active, in other words, there is an active backdoor path between $a_{t-1}$ and $a_{t}$ which can never be blocked. Given that this path cannot be blocked, is the policy learning process identifiable or robust? 
How do you ensure convergence in policy learning under these circumstances? \\n\\n3. The idea of training a representation model $G$ is not that novel. Especially when $\\\\hat{s}_{t}$ is unobserved, it is very hard to directly determine whether the representation model is good or not. While simulations may provide insights, evaluating the model\\u2019s practical effectiveness in real-world conditions can be significantly harder. What methods do you suggest to compare model performance outside of a simulation environment?\\n\\n4. The proposed method appears to be a pipeline structure (i.e., representation model + policy model).\\n \\n 4.1 If overall performance is not expected, what strategy would you recommend to isolate and improve the specific component responsible? How can one effectively determine whether limitations stem from the representation model or from the policy?\\n \\n 4.2 Will there be any cascaded errors from upstream to downstream tasks? Could you elaborate on any mechanisms in place to mitigate such cascading errors?\\n\\n### **Some questions about experiments**\\n\\n1. In your experiments, does the imitator have access to the reward $R$? Additionally, are the expert demonstrations generated from RL algorithms that use the same reward function $R$? \\n\\n2. For the observations, the expert is able to observe $s_{t}$, but the imitator is only able to observe $o_{t}$. Is that correct?\\n\\n3. To what extent does the reward $R$ reflect real-world driving behaviors? How accurately does it capture the dynamics observed in actual driving scenarios?\\n\\n4. Could the authors add more details to the reward $R$? Specifically, how are the four components $r$ defined? If the primary objective is to evaluate route adherence, is $r_{position}$ alone sufficient, or are the other rewards essential? Please clarify the role of each reward component.\\n\\n5. 
Additional metrics could enhance the evaluation process, such as the RMSE between predicted positions and target routes. Would the authors consider including these metrics for a more comprehensive analysis?\\n\\n**Should the authors address these questions thoroughly, I would consider raising my evaluation score.**\\n\\n---------\", \"references\": [\"Pearl, Judea. Causality. Cambridge university press, 2009.\", \"Peters, Jonas, Dominik Janzing, and Bernhard Sch\\u00f6lkopf. Elements of causal inference: foundations and learning algorithms. The MIT Press, 2017.\", \"Ruan, Kangrui, and Xuan Di. \\\"Learning human driving behaviors with sequential causal imitation learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022.\", \"Ruan, Kangrui, et al. \\\"Causal imitation learning via inverse reinforcement learning.\\\" The Eleventh International Conference on Learning Representations. 2023.\", \"Kumor, Daniel, Junzhe Zhang, and Elias Bareinboim. \\\"Sequential causal imitation learning with unobserved confounders.\\\" Advances in Neural Information Processing Systems 34 (2021): 14669-14680.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9A5c 2/3\", \"comment\": \">**Q2.3.Novelty in representation model, determining the quality of representation model, gap between simulation and real environment.**\\n\\nWe thank the reviewer for the constructive comments. Training the representation model $G$ for the imitator is indeed not fresh in the community, still, $G$ is an essential part that affects the overall imitation performance, and our contribution majorly lies in the incorporation of supervised contrastive learning which provides important guidance in the representation learning phase. 
As one cannot access the ground truth state $s\\\\_t$, it is indeed difficult to determine the quality of learned representation. Therefore, we assume the optimality of the expert's action $\\\\hat{a}\\\\_t$, and regularize $G$ to build an alignment between inferred state $\\\\hat{s}\\\\_t$ and $\\\\hat{a}\\\\_t$ with the supervised contrastive learning loss. \\nWe agree that the gap of sim2real requires effort, and our current approach is far from real-world deployments. Human-in-the-loop evaluation could provide useful insights in practice. However, as the driving task in the real world requires orchestrating numerous modules, we cannot conclude that a certain evaluation will fully validate the model's performance. \\n\\n>**Q2.4.1. If overall performance is not expected, what strategy would you recommend to isolate and improve the specific component responsible? How can one effectively determine whether limitations stem from the representation model or from the policy?**\\n\\nWe thank the reviewer for the valuable questions. The separation of learning representation model $G$ and learning policy $J$ does bring complexities in optimizing toward the driving task. \\nIn our work, as we introduce the future image observation reconstruction task $\\\\mathcal{L}_{\\\\text{fo}}(\\\\hat{o}\\\\_{t+1}, o\\\\_{t+1})$ to provide supervisions on $G$'s capability in inferring dynamics and preserving information of raw observations $h\\\\_{t} = (o\\\\_{t-l:t}, v\\\\_{t-l:t})$, the quality of the reconstructed image can provide clues for analyzing performance limitations on the representation model side. We can also permute the partial input of $G$ to investigate the effects of certain features on the specific components. \\nOn the policy side, beyond the supervision from offline datasets, we can also inspect the output smoothness, or perform minor perturbation on the obtained representation $\\\\hat{s}\\\\_t$ to investigate the robustness of policy.\\n\\n>**Q2.4.2. 
About cascading errors.** \\n\\nThanks for the valuable questions. The design of separation will introduce errors from $G$ to $J$, as UCs remain uncontrolled and the ground truth states are unobservable. To mitigate such an issue, we propose a sample-weighting strategy, utilizing the errors in the residual action prediction (RAP) task as proxies for the extent of error in $G$. We assign high weights to samples that obtain high errors in RAP, to encourage the policy $J$ to learn such abnormal scenes. With this mechanism, $J$ is encouraged to match the expert behaviors in these samples properly, even though the upstream $G$ has failed to do so. A detailed discussion and the visualization of the sample-weighting process are provided in Appendix A.4.2.\\n\\n>**Q3.1. Imitator's access to Reward $R$, same reward function for the expert?**\\n\\nThanks for the valuable questions. In our experiments, the imitator has no access to $R$, in both the training and testing stages. The expert demonstrations are generated by an RL agent which is pre-trained with PPO, with the same reward function.\\n\\n>**Q3.2. For the observations, the expert is able to observe $s\\\\_t$, but the imitator is only able to observe $o\\\\_t$. Is that correct?**\\n\\nThat is correct. In our experiments, the expert has access to the ground truth information in the form of bird-eye-view (BEV) images; $s\\\\_t$ stands for the information within the BEV images at timestep $t$. The imitator only has access to $o\\\\_t$, the image captured by the RGB camera, and a corresponding measurement vector $v\\\\_t$ which provides navigation commands and the speed of the ego vehicle.\"}
Experiments conducted in a simulated driving domain demonstrate the effectiveness of the proposed method, with comparisons to several baselines.\\n\\nThe reviewers acknowledge the significance of the problem setting and the novelty of the proposed method. However, reviewers express concerns regarding the technical and conceptual contributions of the work. Specifically, the reviewers highlight the absence of strong theoretical guarantees or empirical evidence supporting the claim that the learned policy is truly causal. They also suggest that the paper's presentation could be improved and that additional analyses are needed to enhance the audience's understanding of the contributions.\\n\\nOverall, the paper addresses an important problem and proposes interesting ideas; however, it can be further improved in terms of analysis, experiments, and clarity of presentation. The authors are encouraged to address these aspects in future revisions to strengthen the impact of their work.\", \"additional_comments_on_reviewer_discussion\": \"The authors added additional experiments during the rebuttal period to address reviewers' concerns.\"}", "{\"comment\": \"Thank you for your response! My concerns are mostly addressed and I will keep my score.\"}", "{\"title\": \"Response to Reviewer MYqZ 2/2\", \"comment\": \">**W3. Performance gap between Scenario 3 and other scenarios.**\\n\\nWe appreciate the reviewer's great efforts in reviewing. Such a gap mainly comes from the environmental parameter setting, as we set Scenario 3 to have more vehicles (=200) compared to other scenarios (mean=100). The combination of compact map size and heavy traffic load in Scenario 3 poses stress on imitators and leads to their highest collision rates in the designed experiments. \\nMoreover, the majority of methods did not exhibit significant drops in unseen environments (Scenarios 5 and 6), which can be attributed to the conditional imitation learning setting we considered in this paper. 
The direct navigation information shown to the imitators can offer strong indications in the deployment phase, which eases the challenge coming from unseen environments.\\n\\n>**W4. More ablation studies, analysis experiments, and visualizations are necessary.**\\n\\nThanks for the constructive suggestions. We have presented more studies in Appendix A.4. We briefly summarize the revisions as follows. \\n(1) For more ablation studies and analysis experiments: We study the effectiveness of major components in our design framework and present them in Appendix A.4.2 and A.4.3. \\n(2) For more visualizations: We present Figure 7 in Appendix A.4.2 to illustrate the process of our proposed importance weighting strategy. \\n(3) To further investigate the capability of our method in the presence of severe spurious correlations, we conduct an experiment under the counterfactual history setting and present the result and discussion in Appendix A.4.1. \\n\\n>**W5. Missing some important references.**\\n\\nThanks for pointing out the missing references. We agree that these works are truly important for imitation learning research, and we have incorporated them in the revised manuscript. Thank you again for your valuable feedback.\\n\\n[1]Wen, Chuan, et al. \\\"Keyframe-focused visual imitation learning.\\\" *arXiv preprint arXiv:2106.06452.* 2021. \\n[2]Codevilla, Felipe, et al. \\\"End-to-end driving via conditional imitation learning.\\\" *2018 IEEE International Conference on Robotics and Automation (ICRA).* IEEE, 2018. \\n[3]Zheng, Ruijie, et al. \\\"Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss.\\\" *Forty-first International Conference on Machine Learning.* 2024.\"}", "{\"title\": \"Official Comment by Reviewer 9A5c\", \"comment\": \"Thank you for your detailed response, which has addressed most of my concerns. 
To further streamline the review process and ensure transparency, I suggest that the authors consider including a concise **global response or summary of rebuttals**. This could highlight the key modifications made to the paper, specifying what is newly added and where these additions have been incorporated. While this may not be a standard practice, it would greatly enhance clarity and assist reviewers in assessing the revisions comprehensively.\"}", "{\"title\": \"Response to Reviewer 9A5c 3/3\", \"comment\": \">**Q3.3 & Q3.4. About reward function.**\\n\\nThank you for your valuable questions. We have provided a detailed introduction about the components of $R$ in Appendix A.3.5. Here, we briefly state the purposes of components in the reward $R=r_{speed} + r_{position} + r_{rotation}+r_{action}$. \\nBased on the current circumstance of the agent (e.g., has another vehicle close to itself, or in front of a red light), the desired speed of the agent is varying, which affects the assignment of $r_{speed}$. The two components $r_{position}$ and $r_{rotation}$ are computed based on the distance and angle difference with respect to the nearest navigation point. The $r_{action}$ punishes the agent for large variations in its actions.\\nDue to the complexity of real-world driving behavior, the $R$ used in our work is not enough to fully depict the desired driving policy. However, it can reflect the driving behavior to some extent, as the four components together require an agent to drive toward the destination while maintaining the corresponding desired speed for each circumstance, and avoiding steep changes in its actions. Therefore, we believe that components in $R$ are essential to evaluate an agent's performance.\\n\\n>**Q.3.5 Adding additional metrics.**\\n\\nWe thank the reviewer for the constructive suggestions. We agree that additional metrics will aid the experiments and analysis. 
However, due to time constraints, we are unable to report the performance of methods in these metrics. We appreciate the reviewer\\u2019s suggestion, and we plan to include more metrics in future work to further validate the effectiveness of methods.\\n\\nWe are sincerely grateful for your time and efforts in the review process.\\n\\n[1]Ruan, Kangrui, and Xuan Di. \\\"Learning human driving behaviors with sequential causal imitation learning.\\\" *Proceedings of the AAAI Conference on Artificial Intelligence.* 2022. \\n[2]Ruan, Kangrui, et al. \\\"Causal imitation learning via inverse reinforcement learning.\\\" *The Eleventh International Conference on Learning Representations.* 2023. \\n[3]Kumor, Daniel, Junzhe Zhang, and Elias Bareinboim. \\\"Sequential causal imitation learning with unobserved confounders.\\\" *Proceedings of the 35th International Conference on Neural Information Processing Systems.* 2021. \\n[4]Zhang, Junzhe, Daniel Kumor, and Elias Bareinboim. \\\"Causal imitation learning with unobserved confounders.\\\" *Proceedings of the 34th International Conference on Neural Information Processing Systems.* 2020. \\n[5]Seo, Seokin, et al. \\\"Regularized behavior cloning for blocking the leakage of past action information.\\\" *Proceedings of the 37th International Conference on Neural Information Processing Systems.* 2023. \\n[6]Park, Jongjin, et al. \\\"Object-aware regularization for addressing causal confusion in imitation learning.\\\" *Proceedings of the 35th International Conference on Neural Information Processing Systems.* 2021.\"}", "{\"comment\": \"Thank you for the valuable suggestion. We are glad to have addressed your concerns. We have added a general response to introduce the key modifications. We also highlighted them with blue text color in our updated manuscript. Your support would be instrumental in enhancing the quality and impact of our work.\"}" ] }
4Po8d9GAfQ
Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding
[ "Haolin Chen", "Yihao Feng", "Zuxin Liu", "Weiran Yao", "Akshara Prabhakar", "Shelby Heinecke", "Ricky Ho", "Phil L Mui", "Silvio Savarese", "Caiming Xiong", "Huan Wang" ]
Large language models (LLMs) have shown impressive capabilities, but still struggle with complex reasoning tasks requiring multiple steps. While prompt-based methods like Chain-of-Thought (CoT) can improve LLM reasoning at inference time, optimizing reasoning capabilities during training remains challenging. We introduce LaTent Reasoning Optimization (LaTRO), a principled framework that formulates reasoning as sampling from a latent distribution and optimizes it via variational approaches. LaTRO enables LLMs to concurrently improve both their reasoning process and ability to evaluate reasoning quality, without requiring external feedback or reward models. We validate LaTRO through experiments on GSM8K and ARC-Challenge datasets using multiple model architectures. On GSM8K, LaTRO improves zero-shot accuracy by an average of 12.5\% over base models and 9.6\% over supervised fine-tuning across Phi-3.5-mini, Mistral-7B, and Llama-3.1-8B. Our findings suggest that pre-trained LLMs possess latent reasoning capabilities that can be unlocked and enhanced through our proposed optimization approach in a self-improvement manner.
[ "Large language model", "Optimizing LLM reasoning capabilities", "Self-improvement", "Reward model-free optimization", "Reinforcement learning" ]
Reject
https://openreview.net/pdf?id=4Po8d9GAfQ
https://openreview.net/forum?id=4Po8d9GAfQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zMl0wOOr27", "uO3weKqead", "tzaJhYzJNn", "qK1Va2I3UR", "maGJEVcG2a", "KNuduTBvGX", "JGF0IxRxAu", "8pMPi3D3Ns" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "meta_review", "official_review", "decision", "official_review" ], "note_created": [ 1730217656700, 1730119256094, 1730740051800, 1732750570925, 1734877773156, 1729870612154, 1737523956134, 1730430503530 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9035/Reviewer_dFSv" ], [ "ICLR.cc/2025/Conference/Submission9035/Reviewer_XUrD" ], [ "ICLR.cc/2025/Conference/Submission9035/Reviewer_b1ci" ], [ "ICLR.cc/2025/Conference/Submission9035/Authors" ], [ "ICLR.cc/2025/Conference/Submission9035/Area_Chair_45Cp" ], [ "ICLR.cc/2025/Conference/Submission9035/Reviewer_WHVo" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9035/Reviewer_n3ec" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces LaTRO, a novel approach that formulates Chain-of-Thought (CoT) reasoning as sampling from a latent distribution, optimized through variational techniques. By leveraging the probability of generating the correct answer as an implicit reward, LaTRO unifies the learning of both the policy and reward models, allowing large language models to refine reasoning paths in a self-rewarding manner. The authors demonstrate LaTRO\\u2019s effectiveness through experiments on the GSM8K and ARC-Challenge datasets across various model architectures. Their results indicate that latent reasoning capabilities within pre-trained language models can be unlocked and enhanced using this self-improvement framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a novel perspective by framing reasoning as a process of sampling from a latent distribution and addressing it through variational methods.\\n\\n2. 
The paper leverages the model's own probability estimates as an implicit reward, unifying the training of the policy and reward models.\\n\\n3. The paper is well-organized and easy to follow.\", \"weaknesses\": [\"1. The experimental setup lacks sufficient strong baselines, which are essential for a robust evaluation. Two key baselines to consider are:\", \"A stronger SFT Baseline: Fine-tuning the policy model with correct reasoning paths. Given the availability of ground truth answers, multiple reasoning paths could be sampled, retaining only those that align with the ground truth. This baseline would provide a more rigorous comparison for evaluating LaTRO\\u2019s effectiveness in reasoning.\", \"DPO Baseline: The authors could further fine-tune the policy model using the DPO algorithm, incorporating both correct and incorrect reasoning paths. Actually, the DPO algorithm aligns closely with LaTRO in its approach, as both methods aim to avoid training an explicit reward model. Including DPO as a baseline would highlight LaTRO\\u2019s strengths relative to an approach that similarly leverages implicit reward mechanisms.\", \"2. The experimental scope is limited, as only two datasets and small models were tested.\", \"Expanding the experiments to include a wider range of reasoning datasets would better assess the model's reasoning capabilities. Standard practice for evaluating reasoning in large language models includes diverse datasets that cover arithmetic reasoning (only GSM8K is not enough for arithmetic reasoning evaluation), commonsense reasoning, symbolic reasoning, and other reasoning types. Incorporating these would provide a more comprehensive evaluation.\", \"Testing across varying model scales, especially with larger models, could provide insights into how the approach scales with model size and whether larger models yield better reasoning performance.\", \"3. 
Although the authors claim that training did not rely on external feedback, the ground truth answers effectively serve as a form of implicit external feedback or reward.\"], \"questions\": \"1. In Proposition 1, $p(y|x) = \\\\int p(y|z,x) p(z|x) dz$ holds for any CoT-based method. Why, then, is CoT-SC introduced here?\\n\\n2. What is the definition of \\\"golden rationales\\\", and why can\\u2019t ARC-Challenge have golden rationales?\\n\\n3. What can the experimental results on ARC-Challenge demonstrate? The two baselines are too weak, as the SFT baseline did not utilize rationales.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces LaTRO, a framework that enhances LLM\\u2019s reasoning abilities by treating reasoning as sampling from a latent distribution and optimizing it with variational methods. LaTRO allows LLMs to improve their reasoning process and evaluation of reasoning quality simultaneously, without external feedback. Experiments on GSM8K and ARC-Challenge datasets demonstrate the effectiveness of LaTRO compared with the SFT training method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. LaTRO regards the reasoning process as sampling from a latent distribution and optimizes it using a variational method. This approach is different from prompt-based methods such as CoT and is closer to unsupervised learning. Besides, the feasibility of LaTRO is verified by mathematical proof.\\n\\n2. This paper focuses on a very interesting topic, which enables an LLM to improve itself through a self-reward mechanism without external supervision and feedback signals.\", \"weaknesses\": \"1. Request a step-by-step description or flowchart of the LaTRO pipeline. 
Although a large number of formulas are used in this paper to explain each part, it lacks a detailed description of the proposed method, which makes it difficult for readers to understand the complete pipeline of the proposed method and reproduce it.\\n\\n2. As far as I know, there are some works that gradually enhance the capabilities of LLMs (not limited to reasoning tasks), including prompt-based [1][2][3] and training-based methods [4][5][6], some of which do not use any feedback information to enhance the capabilities of LLMs [7][8]. The author should discuss the differences between this work and these works and compare the performance of these works. It is necessary to add a dedicated subsection in the Related Work section discussing these specific works and their methods. If possible, include performance comparisons with these methods in the experimental section.\\n\\n[1] When can llms actually correct their own mistakes? A critical survey of self-correction of llms\\n\\n[2] Learning From Mistakes Makes LLM Better Reasoner\\n\\n[3] Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning\\n\\n[4] REFINER: Reasoning Feedback on Intermediate Representations\\n\\n[5] CRYSTAL: Introspective Reasoners Reinforced with Self-Feedback\\n\\n[6] SELF: Language-driven Self-evolution for Large Language model\\n\\n[7] Small language models can self-correct\\n\\n[8] Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models\\n \\n3. In the experiments, the author only employs the SFT training method and the base model as baselines, without comparing the performance with COT-SC mentioned in Section 3 and the classic self-improvement works mentioned above, making it difficult to demonstrate the advantages of the proposed LaTRO. So, please provide a more comprehensive analysis of how LaTRO performs relative to these additional baselines across different tasks and metrics.\", \"questions\": \"1. 
About the self-evaluation task in LaTRO: how does LaTRO evaluate the probability of each reasoning path producing the correct answer? What does the conditional probability mentioned in Section 4.1 mean? Is there a task restriction for this evaluation method? What is the accuracy of self-evaluation? The authors did not discuss these questions in the paper, nor did they explore them in depth in the experiments.\\n\\n2. The authors only used a formula to explain the use of self-reward signals to achieve parameter updates, so what exactly does this update parameter refer to during the training phase? How is it implemented? We suggest providing more detailed information.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work focuses on enhancing the reasoning abilities of large language models (LLMs) during the training phase without relying on external feedback. The authors motivate the work by raising an important question on improving the reasoning ability of LLMs during the training phase, since most prior works have focused on achieving this at inference time. To do so, the authors propose to sample diverse reasoning rationales from a latent distribution and optimize the LLM through a variational framework using the sampled rationales. The intuition behind the application of the variational framework is well motivated through the objective of self-consistency based chain-of-thought prompting. Further, the work proposes to use the likelihood of generating the correct answer conditioned on a rationale as a proxy to explicit reward models to optimize the LLM towards generating better reasoning chains. Results demonstrate that the proposed LaTRO method helps in improving the accuracy achieved for multiple LLMs such as Mistral-7B, Llama-3.1-8B etc. 
on GSM8K and ARC-Challenge datasets compared to the corresponding base and the SFT versions of the LLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Enhancing the reasoning ability of LLMs without relying on feedback from any external LLM during the training phase is an important research question.\\n2. Suitability of applying the variational objective is well motivated, explained and justified in Sections 3 and 4.1.\\n3. Accuracy gains over the base and the SFT version of the LLMs are shown with some ablation studies on greedy decoding vs. self consistency based sampling. Further, the authors show that inference time scaling of number of reasoning samples obtained using LaTRO can additionally enhance the accuracy.\", \"weaknesses\": \"1. Presentation of the introduction section of the paper needs improvement - It should better motivate the need behind reducing the reliance on external LLMs/feedback for improving a given LLM. Further, it should provide information about why the variational framework is suitable and elaborate some details about the proposed method in the introduction itself.\\n2. Discussion on related work is very limited. Even though the proposed method is an effort to reduce reliance on external LLMs, it should still discuss and contrast against existing methods that leverage external LLMs (for example, [1,2]) as well as employ self-play based techniques to improve an LLM by itself with requiring help from any other LLM (for example, [3, 4]). The example references are only indicative (but not exhaustive) of the type of citations that should be included.\\n3. Lack of appropriate baselines - Even though the work claims and focusses at improving the base/SFT version of the LLM through the proposed LaTRO framework, it should still compare against some of the existing self-play trainable methods (by adapting them for 7B LLMs) as baselines (eg. 
[3] - on datasets where the ground-truth rationale is available and [4]). Such methods improve the rationales generated by an LLM by exploring the generation space and discriminating better rationales from the inferior ones.\\n4. More reasoning datasets such as CSQA, Hellaswag etc. should be considered for evaluation studies.\\n\\n[1] Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8003\\u20138017, Toronto, Canada. Association for Computational Linguistics.\\n\\n[2] Zephyr: Direct Distillation of LM Alignment. Lewis Tunstall and Edward Emanuel Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro Von Werra and Clementine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M Rush and Thomas Wolf. First Conference on Language Modeling, 2024.\\n\\n[3] Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models. Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu. ICML 2024.\\n\\n[4] Self-Rewarding Language Models. Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston\", \"questions\": \"1. Line 290: It is not clear about how the first gradient term in Eq. 4 would lead to optimising the LLM policy to generate higher quality rationales. Further elaboration is needed as to why generating/optimising the likelihood of input question/instruction conditioned on the rationale would lead to better reasoning.\\n2. Lines 414-415: Please support with examples about why improvements on ARC-Challenge are relatively lesser. 
The magnitude of improvements is not at all an issue; however, it should be better demonstrated why better reasoning chains are not leading to higher improvements. Does this make ARC-Challenge ill-suited for this study since it involves limited reasoning scope?\\n3. Line 430: Why is it needed to generate rationales that are as long as 500 tokens? It would be good to show the usefulness of the information contained in the longer rationales and why it leads to better performance before saturating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for their insightful advice. It is good to hear that our method is presented in a clear way, and we also agree that we did not conduct enough experiments to demonstrate the effectiveness of our algorithm empirically.\\n\\nDue to a recent lack of computing resources, we could not experiment with additional baselines and datasets at this point. We finished some experiments, but we feel it would be more comprehensive to release the revision when we have more results.\"}", "{\"metareview\": \"This paper aims to enhance the reasoning abilities of LLMs at the stage of training. It formulates the reasoning process as sampling from a latent distribution and optimizes the LLM through a variational framework using the sampled rationales. While the target problem is important and the method is well motivated, main concerns of the reviewers include: 1. discussion on related work is limited; 2. the experiment did not compare with strong baselines, and only two datasets and small models were tested. The authors did not provide a formal rebuttal, and instead acknowledged that they \\\"did not conduct enough experiments to demonstrate the effectiveness of our algorithm empirically\\\". 
AC agrees and suggests the authors revise and improve the submission accordingly.\", \"additional_comments_on_reviewer_discussion\": \"Two main concerns from the reviewers:\\n\\n1. discussion on related work is limited; \\n2. the experiment did not compare with strong baselines, and only two datasets and small models were tested. \\n\\nNo formal rebuttal is provided. Instead, the authors acknowledged that they \\\"did not conduct enough experiments to demonstrate the effectiveness of our algorithm empirically\\\".\\n\\nI tend to agree with the reviewers that this work requires a more comprehensive and convincing evaluation.\"}", "{\"summary\": \"This paper proposes a principled framework, LaTRO, to treat the reasoning process as sampling from a latent distribution and enable LLMs themselves to act as reward models that evaluate the quality of reasoning rationales. The proposed LaTRO outperforms supervised fine-tuning baselines on 2 datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The structure of this paper is clear, and it's easy to follow the main idea and the contribution.\\n2. The research problem this paper targets is very important to the community.\\n3. The motivation is sound and the benefits of the approach are clear.\", \"weaknesses\": \"1. Although modeling the rationales as latent variables to sample is well defined and proposed, the paper lacks a discussion of previously proposed reasoning methods that also formulate the reasoning process as latent variables [1, 2], even though they are cited in the related work. Specifically, [1] proposes to sample diverse reasoning rationales to improve the prediction performance and also proposes an EM-like algorithm to improve the reward model and LLM alternately. It would be great to have a discussion and comparison with these methods.\\n2. 
The paper does not compare with various prompting-based reasoning baselines mentioned in the related work section, such as tree-of-thought [3] and RAP [4], as well as fine-tuning baselines such as STaR [5], which is a missed opportunity to demonstrate its effectiveness. It would be better to compare them with metrics like training / inference computational cost and accuracy.\\n3. The paper does not provide a confidence interval, making it unclear how robust the proposed LaTRO is to initialization and randomness. It would be great to report the results from at least 3 repetitions.\\n4. As the proposed LaTRO is a general approach, it would be great to evaluate it on more benchmark datasets to verify its effectiveness, such as HumanEval [6] and MBPP [7], which are popular in the current LLM reasoning community.\\n\\n\\n[1] Hu, Edward J., et al. \\\"Amortizing intractable inference in large language models.\\\" arXiv preprint arXiv:2310.04363 (2023).\\n\\n[2] Hoffman, Matthew Douglas, et al. \\\"Training chain-of-thought via latent-variable inference.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Yao, Shunyu, et al. \\\"Tree of thoughts: Deliberate problem solving with large language models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Hao, Shibo, et al. \\\"Reasoning with language model is planning with world model.\\\" arXiv preprint arXiv:2305.14992 (2023).\\n\\n[5] Zelikman, Eric, et al. \\\"Star: Bootstrapping reasoning with reasoning.\\\" Advances in Neural Information Processing Systems 35 (2022): 15476-15488.\\n\\n[6] Chen, Mark, et al. \\\"Evaluating large language models trained on code.\\\" arXiv preprint arXiv:2107.03374 (2021).\\n\\n[7] Austin, Jacob, et al. 
\\\"Program synthesis with large language models.\\\" arXiv preprint arXiv:2108.07732 (2021).\", \"questions\": \"See \\\"Weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposesLaTRO, a novel framework aimed at enhancing the reasoning capabilities of LLMs. LaTRO addresses the challenge of improving reasoning during the training phase by formulating reasoning as sampling from a latent distribution and optimizing it through variational approaches. This method allows LLMs to self-improve their reasoning process and ability to evaluate the quality of reasoning without external feedback or reward models. The paper validates LaTRO's effectiveness through experiments on GSM8K and ARC-Challenge datasets, demonstrating significant improvements over base models and supervised fine-tuning approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The writing is clear and easy to follow.\", \"The discussed topic and motivation are both innovative and significant.\"], \"weaknesses\": [\"Although I'm not familiar with the topic discussed in this article, I believe the experiments presented are too few and not comprehensive enough. Moreover, the datasets considered are limited to only GSM8K and ARC-Challenge, which lacks persuasiveness.\", \"The number of case study examples is too limited, with only a few instances in Figure 4 and the appendix, which is not convincing.\", \"The proposed method, LaTRO, especially the stability of the Self-reward component, has not been adequately considered.\"], \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
4PlbIfmX9o
Graph Assisted Offline-Online Deep Reinforcement Learning for Dynamic Workflow Scheduling
[ "Yifan Yang", "Gang Chen", "Hui Ma", "Cong Zhang", "Zhiguang Cao", "Mengjie Zhang" ]
Dynamic workflow scheduling (DWS) in cloud computing presents substantial challenges due to heterogeneous machine configurations, unpredictable workflow arrivals/patterns, and constantly evolving environments. However, existing research often assumes homogeneous setups and static conditions, limiting flexibility and adaptability in real-world scenarios. In this paper, we propose a novel *Graph assisted Offline-Online Deep Reinforcement Learning* (GOODRL) approach to building an effective and efficient scheduling agent for DWS. Our approach features three key innovations: (1) a *task-specific* graph representation and a *Graph Attention Actor Network* that enable the agent to dynamically assign focused tasks to heterogeneous machines while explicitly considering the future impact of each machine on these tasks; (2) a *system-oriented* graph representation and a *Graph Attention Critic Network* that facilitate efficient processing of new information and understanding its impact on the current state, crucial for managing unpredictable workflow arrivals/patterns in real-time; and (3) an *offline-online* method that utilizes imitation learning for effective offline training and applies gradient control and decoupled high-frequency critic training techniques during online learning to sustain the agent’s robust performance in rapidly changing environments. Experimental results demonstrate that GOODRL significantly outperforms several state-of-the-art algorithms, achieving substantially lower mean flowtime and high adaptability in various online and offline scenarios.
[ "workflow scheduling", "graph attention neural network", "reinforcement learning", "online learning" ]
Accept (Poster)
https://openreview.net/pdf?id=4PlbIfmX9o
https://openreview.net/forum?id=4PlbIfmX9o
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zRqf9fG56s", "ympmkDlrwe", "yL1UTlqezN", "sPtrU9kQZ9", "qXjOJH7lwv", "jx2i1tamfO", "jkZjaW5Fo5", "giQrFVAjrI", "ePJ8u3pStT", "dIxU2mNJRw", "aSHqa1AvAf", "XgZBpt3qSz", "X2VqPwVDu3", "W6G3ZThWFO", "TuQfZlp4y8", "Te32jHY88V", "TZOrJYA5gn", "Ogec9f64hY", "N6gRhQdn58", "N2AiFwwa8m", "HM1IYrwaOZ", "DwT32zlOZu", "D891BknJQ1", "BofPnhShe8", "9ZF0Kv6bnB", "9Dtkh2jEwK", "8d8yp5eZHw", "5NeVwrTtbR", "54rDsuPI4L" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732274968960, 1730842290508, 1732113782511, 1730425187307, 1732647858648, 1730719798972, 1733191384642, 1732636817304, 1732074556042, 1734760740494, 1732790694745, 1733191314946, 1730697735183, 1737523628993, 1732731076856, 1733100424013, 1732093552131, 1732105724111, 1732103171107, 1732078874848, 1732177682188, 1732738165733, 1732109557180, 1733126959616, 1732637774950, 1732102725340, 1732100334596, 1732094917526, 1732649164312 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_jCq6" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_RdYS" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_RdYS" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_Eeze" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_8vKM" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4258/Area_Chair_z3Ps" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_8vKM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_jCq6" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Reviewer_Eeze" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ], [ "ICLR.cc/2025/Conference/Submission4258/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response Updated\", \"comment\": \"Dear Reviewers,\\n\\nWe are pleased to inform you that a revised version of our paper has been uploaded to address all your concerns. All modifications have been **highlighted in red** for your convenience. We have made every effort to incorporate your suggestions to the fullest extent possible. We believe these updates satisfactorily resolve the issues outlined, aligning closely with the responses provided in our rebuttals.\\n\\nWe greatly value your feedback and would appreciate any further suggestions for improvement. If you find that our revisions adequately address your concerns, we kindly ask you to consider adjusting your ratings to reflect the updated version of our paper.\\n\\nThank you very much for your time and thoughtful evaluation of our work. 
We remain open to further clarifications or refinements as requested.\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes a novel Graph-assisted Offline-Online Deep Reinforcement Learning (GOODRL) approach for Dynamic Workflow Scheduling (DWS) in cloud environments. The authors introduce three main innovations: a task-specific graph representation with a Graph Attention Actor Network for focused task assignments, a system-oriented graph representation with a Graph Attention Critic Network for real-time state evaluation, and a hybrid offline-online RL method to improve adaptability. The offline stage uses imitation learning for stable initial policy development, while the online stage applies advanced PPO techniques for continuous adaptation. Experiments demonstrate GOODRL\\u2019s superiority over state-of-the-art methods in minimizing mean flowtime and enhancing performance in several offline and online settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow\\n2. Special designs on the actor and critic network to have more efficient embeddings and long-range interaction modeling\\n3. Customized gradient for stabilizing the PPO training for the online settings\\n4. Experiments are conducted on many online and offline settings\", \"weaknesses\": \"This paper presents a comprehensive learning pipeline for addressing the dynamic workflow scheduling (DWS) problem. I appreciate the authors for their efforts in adapting various components to suit the unique challenges of DWS.\\n\\nThe primary concern with this paper is the applicability of the proposed pipeline. Many of the modifications and design choices appear closely tailored to DWS, leaving it unclear how generalizable this approach might be to other scheduling problems, such as flexible job shop scheduling. Can these designs be readily adapted for other problem domains? 
The paper would be significantly strengthened by demonstrating the pipeline\\u2019s transferability to other scheduling scenarios.\\n\\nSeveral techniques are introduced throughout the pipeline, though not all are rigorously validated in the ablation study. A more thorough investigation into the contributions of each component would enhance our understanding of their individual benefits.\\n\\nOverall, I appreciate the contributions of this work and currently lean toward a borderline acceptance.\\n\\n---\\nI increase my score to 8 in response to the author's rebuttal.\", \"questions\": \"Besides the concerns raised in the weakness part, I have the following additional questions:\\n\\n1. PPO has also been applied to solve other combinatorial optimization problems like routing problems, where the horizons are also very large. Could you give some intuitions why PPO is particularly unstable for this problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer RdYS cnt.\", \"comment\": \"**Q2: Practical deployment requirements for GOODRL in terms of computational resources.**\\n\\nThe core computation of GOODRL is powered by a GNN, which operates efficiently on widely available hardware resources. For example, on a single Intel Xeon CPU core (2.8GHz), it makes scheduling decisions in **6-7 ms**. Such **high computational efficiency** ensures its **suitability for real-time decision-making** in large-scale, dynamic environments. \\n\\nHeuristic methods like HEFT may reduce decision times to under 1 ms. However, **this minor reduction in _millisecond-level_ has negligible practical impact**, as the time required for executing tasks or communicating between machines dominates the scheduling process. \\n\\nMore importantly, **simple heuristics lack the flexibility** of GOODRL, which can adapt to changing environments and outperform simple heuristics. 
GOODRL delivers superior scheduling outcomes while remaining computationally feasible, making it highly **practical for real-world deployment**.\\n\\n**Q3: Insights in handling noisy or incomplete data.**\\n\\nGOODRL requires only basic workflow information, including inter-task dependencies and computation resource requirements. We interpret \\u201cnoisy or incomplete data\\u201d as cases where newly arrived workflows lack precision or completeness:\\n\\n- **Inter-task Dependencies**: These must be precise for a workflow to be processed accurately.\\n\\n- **Computation Resource Requirements**: If noisy, GOODRL may make suboptimal decisions, similar to other methods. Integrating recent **resource prediction** techniques [3-4] may enhance GOODRL's reliability, though this is **beyond the current scope** and will be explored in future work.\\n\\n- **Incomplete Data**: Missing resource requirements make scheduling tasks unfeasible for any scheduler. To mitigate this, users typically provide **estimated** (often pessimistic) **resource needs**, resulting in over-allocation. Advanced machine learning methods [5-6] can improve these estimates, which GOODRL could adopt to handle such cases better.\\n\\nWe appreciate your suggestion to address this important challenge in our future research.\\n\\n**Q4: Extend the proposed approach to support multi-objective optimization, such as cost and energy efficiency.**\\n\\nExtending GOODRL to support multi-objective optimization is feasible and valuable, as explained in response to W4. 
Key considerations to be covered in the revised paper include:\n\n- **Reward Design**: A weighted reward combining flowtime and energy efficiency (or cost) can be used.\n\n- **Graph Representation**: Additional features, such as machine prices, scheduling overhead, and QoS requirements, can be integrated into graph nodes.\n\n- **Learning Challenges**: Addressing trade-offs between conflicting objectives may require new techniques like Pareto-optimal training [7].\n\n**Q5: Scalability and heterogeneity challenges in large cloud environments.**\n\nGOODRL has been **successfully** tested on **large DWS** problems with up to 600,000 tasks, demonstrating its scalability in this aspect. \n\nWhile we currently assume a fixed VM collection, expanding this would increase the size of the graph-based state representation. However, prior research showed that GNNs can scale to process graphs with millions of nodes [8-9].\n\nTo process large graphs efficiently, we can **restrict the VMs considered for each task** by using heuristic rules or machine learning models to **pre-select suitable VMs for graph construction**. Such approaches will be explored in future work and discussed in the revised paper. We also note that managing extremely large resource sets is not our current focus and remains an open challenge in the field.\n\nGOODRL is designed to work with **heterogeneous machine configurations** (e.g., VMs provided by Amazon EC2 in our experiments), which are captured by several machine features in the graph representation. Experiments that further evaluate the impact of resource heterogeneity on scheduling performance will be covered in our discussion of future research.\n\n---\n[3] Ullah et al. (2023). Intelligent time-series forecasting framework for non-linear dynamic workload and resource prediction in cloud. _Computer Networks_.\n\n[4] Nawrocki et al. (2023). Data-driven adaptive prediction of cloud resource usage. 
_Journal of Grid Computing_.\n\n[5] Dogani et al. (2023). Multivariate workload and resource prediction in cloud computing using CNN and GRU by attention mechanism. _The Journal of Supercomputing_.\n\n[6] Jia et al. (2024). DuCFF: A dual-channel feature-fusion network for workload prediction in a cloud infrastructure. _Electronics_.\n\n[7] Lin et al. (2022). Pareto set learning for expensive multi-objective optimization. In _NeurIPS_.\n\n[8] Chiang et al. (2019). Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In _KDD_.\n\n[9] Wu et al. (2022). NodeFormer: A scalable graph structure learning transformer for node classification. In _NeurIPS_."}", "{"summary": "The paper introduces an innovative approach, GOODRL, for handling dynamic workflow scheduling in cloud computing environments. This method integrates a task-specific graph representation with Graph Attention Networks (GATs) for actor-critic networks and incorporates offline imitation learning alongside online reinforcement learning to adapt to changing conditions. 
The proposed system is evaluated in various online and offline scenarios and is shown to outperform existing state-of-the-art methods in terms of mean flowtime and adaptability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes a unique combination of graph representations tailored for actor and critic networks, enhancing action differentiation and value estimation.\", \"GOODRL demonstrates improvements in mean flowtime over established baseline algorithms, showcasing robust performance in both offline and online settings.\", \"The offline-online learning approach with imitation learning and gradient control addresses challenges in adapting to dynamic environments, adding practical value.\", \"The ablation studies and performance comparisons are thorough, providing strong evidence for the contributions and architectural decisions.\"], \"weaknesses\": [\"The paper's reliance on simulations limits its generalizability to real-world cloud environments. Practical tests in real cloud data centers would bolster the validity of the results.\", \"The experiments primarily focus on a specific set of workflow types and machine configurations, potentially limiting the applicability of findings to other types of DWS problems.\", \"The computational overhead associated with the proposed GAT-based architectures is not discussed in detail, raising questions about deployment feasibility in large-scale, real-time applications.\", \"While the method performs well in flowtime reduction, other practical objectives, such as energy efficiency and cost, are not explored, which would be valuable for broader applicability.\", \"The paper lacks discussion on how the model generalizes to varied workloads, impacting its robustness in dynamic cloud environments.\"], \"questions\": [\"Could the authors elaborate on how their model adapts to significant changes in workflow patterns or cloud configurations without extensive retraining? 
How does the method maintain robust performance under dynamic conditions that differ from the training data?", "What are the practical deployment requirements for GOODRL in terms of computational resources, and how do they compare with simpler heuristic-based solutions in large-scale, real-time applications?", "Can the authors provide more insights into how the method handles noisy or incomplete data, which is a common challenge in real-world cloud scheduling environments?", "How might the proposed approach be extended or adapted to incorporate multi-objective optimization, such as balancing energy efficiency with flowtime reduction, and what specific challenges would need to be addressed to achieve this?", "Could the authors comment on potential scalability issues when deploying GOODRL in larger cloud infrastructures or in environments with highly heterogeneous machine configurations, and how these challenges could be mitigated?"], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "3", "code_of_conduct": "Yes"}", "{"comment": "Thank you for the detailed responses, particularly on scalability, multi-objective optimization, and computational overhead. The explanations mostly addressed my concerns. I have revised my score in response."}", "{"summary": "Prior works often consider homogeneous setups and static conditions, and fail to consider the dynamic nature of the workflow scheduling problem. To this end, the paper proposes a GNN-based DRL approach with an offline stage as well as an online stage.", "soundness": "3", "presentation": "3", "contribution": "2", "strengths": ["The paper has many diagrams, which help the readers to understand.", "GOODRL shows strong performance against other baselines."], "weaknesses": ["My main concern is that the paper seems to be applying some existing algorithms to a scheduling problem. 
There are some simple modifications at different parts of the overall method, which end up giving good performance. However, what are the broader insights of this work?", "Consider adding more background on the DWS problem and studies on real traces."], "questions": "1. In a real-world data cluster, how does the arrival pattern change over time? Consider plotting the 95%-99% quantile of the number of simultaneous arrivals. Please also consider adding more background information.\n2. How long does the model take to make a decision at inference time, compared to prior approaches?\n3. It seems that in Table 1 and 2, the arrival rates during training and testing are always the same for each scenario. However, in a real-world data center, the arrival pattern might fluctuate over time, especially in the case of extreme events (e.g., holidays or deadlines). How robust is your approach to such distribution shifts?\n4. In Table 1 and 2, \"Ours-Offline\" already achieves a very good performance. If due to distribution shifts, the offline version gets a much lower performance, can the online algorithm quickly adapt to such changes?\n5. Many real-world scenarios involve some type of resource contention or performance interference. For example, two tasks are both memory intensive, so maybe we should allocate them on different machines. How does GOODRL address this issue?", "minor": ["Line 45, \"In fact existing\" --> \"In fact, existing\""], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "3", "code_of_conduct": "Yes"}", "{"title": "Further Response to Reviewer Eeze - [2/2]", "comment": "**3. 
Reaffirming the novelty and significance of our contributions**\nLike other L2O works published at top-tier ML conferences, our work builds on existing ML techniques such as GAT and RL, yet it makes **new contributions to the ML community**, particularly in the L2O domain:\n| **Challenges** | **Existing Limitations** | **Our Newly Proposed Techniques** |\n|-|-|-|\n| **Capturing changes in dynamic environments** | 1) Rely on a **static graph**, i.e., the number of nodes in graphs is constant as all information is known. 2) **Unable** to capture complex and dynamic relationships between workflows and machines. | **Dynamic graph representation with GAT for task structures.** To our knowledge, this type of dynamic modeling **has not been explored** in prior L2O works for scheduling. |\n| **Solving RL stability for large-scale problems** | 1) Use a **shared** or **only** an actor encoder. 2) **Neglect** the critic's role in AC-based RL stability for large-scale problems (see our response Q1 to Reviewer jCq6 for details). | **Separate encoders for actor and critic** can significantly **enhance learning reliability and scalability** for solving COPs, including but not limited to scheduling. |\n| **Adapting to unpredictable future changes** | 1) Perform **offline learning** using existing methods. 2) **Unable** to continuously learn in the face of future environmental changes. | **Offline imitation learning with online adaptation.** To our knowledge, this framework **has not been applied** in the L2O domain, providing a novel and valuable contribution to L2O research. |\n\n**Table R7. Limitations of existing works and our novelties.**\n\n**4. Relevance to the ICLR community** \nAs the **ICLR official account highlighted last week** (https://x.com/iclr_conf), ``_Does this paper take a gradient step in a promising direction? Is the community better off with this paper published? 
If the answer is yes, then the recommendation should be to accept_.\u2019\u2019\n- **X1: Yes**, our research takes **a positive step in a promising direction** by addressing challenging Dynamic Workflow Scheduling (DWS) problems through the introduction of dynamic graph representations, separately designed and trained actor and critic networks, and a hybrid offline-online learning framework. These innovations enhance scalability, stability, and adaptability, **pushing the boundaries of L2O methods** and **paving the way for broader real-world applications**.\n- **X2: Yes**, publishing this paper will **greatly benefit the ICLR community** by introducing innovative ML techniques that are not only applicable to DWS but **also transferable to other frequently studied problems**, such as FJSS (see Appendix N) and multi-objective optimization (see Appendix O). Our in-depth study provides valuable insights into reliable online learning methods, advanced state representations for dynamic environments, and novel actor-critic network architectures (see Appendix F and G) that **scale effectively to large problem sizes**. These contributions establish **a solid foundation** for advancing both algorithmic research and practical applications in the L2O field.\n\nWe hope this response clarifies the broader relevance and novelty of our work. Would you please reconsider your evaluation in light of this discussion? Thank you again for your constructive feedback and consideration.\n\nBest regards, \nPaper 4258 Authors\n\n---\n[8] Zhang, C., Song, W., Cao, Z., Zhang, J., Tan, P. S., & Chi, X. (2020). Learning to dispatch for job shop scheduling via deep reinforcement learning. In _NeurIPS_.\n\n[9] Zhang, C., Cao, Z., Song, W., Wu, Y., & Zhang, J. (2024). Deep reinforcement learning guided improvement heuristic for job shop scheduling. In _ICLR_.\n\n[10] Corsini, A., Porrello, A., Calderara, S., & Dell'Amico, M. (2024). 
Self-labeling the job shop scheduling problem. In _NeurIPS_."}", "{"comment": "I would like to thank the authors for their detailed response to the questions posed, especially clarifying the challenges unique to their domain and detailed explanation of how they model their Graph Neural Network features. I have revised my scores in response."}", "{"title": "Response to Reviewer jCq6", "comment": "We sincerely thank you for your positive feedback and valuable suggestions on our work. We hope our following response can address your concerns. All additional experiments and discussions will be included in the revised paper soon.\n\n**W1: Can these designs be readily adapted for other problem domains, such as flexible job shop scheduling?**\n\nWe confirm that our designs **can be readily adapted** for other scheduling problems, such as Flexible Job Shop Scheduling (FJSS) problems. To address your concern, we conducted **additional experiments** on FJSS from [1], as it is a strong representative and was mentioned by Reviewer 8vKM. Table R1 showed that our method achieved highly **competitive** performance, demonstrating the **transferability** of our pipeline (i.e., GOODRL). \n\n| Size | MOR | SPT | FIFO | MWKR | DRL-G | DRL-S | Ours |\n|-------------|--------|--------|--------|--------|--------|--------|--------|\n| 10\u00d75 | 116.69 | 129.06 | 119.62 | 115.29 | 111.67 | **105.61** | 112.57 |\n| 20\u00d75 | 217.17 | 229.89 | 216.13 | 216.98 | 211.22 | 207.50 | **202.38** |\n| 30\u00d710 | 320.18 | 347.40 | 328.50 | 319.89 | 313.04 | 312.20 | **304.63** |\n| 40\u00d710 | 425.19 | 443.30 | 427.22 | 425.70 | 416.18 | 415.15 | **395.70** |\n\n**Table R1. Results on FJSS instances with different sizes.**\n\nIn our experiments, all sequence-structured jobs within an FJSS instance are _represented jointly as a workflow_, enabling our pipeline to handle FJSS effectively as a special case of our workflow scheduling problem. 
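As a rough illustration of this reduction (a minimal sketch with hypothetical names, not the authors' implementation), each sequence-structured FJSS job can be emitted as a chain of precedence edges, and the chains unioned into one workflow DAG:

```python
# Illustrative sketch: fold the jobs of an FJSS instance into a single
# workflow DAG so FJSS becomes a special case of workflow scheduling.
# Function and operation names here are hypothetical.

def fjss_to_workflow(jobs):
    """jobs: list of jobs, each an ordered list of operation ids.

    Returns (tasks, edges). Each edge (u, v) means 'u must finish
    before v can start'. Jobs are mutually independent, so no edges
    are added between operations of different jobs."""
    tasks, edges = [], []
    for job in jobs:
        tasks.extend(job)
        edges.extend(zip(job, job[1:]))  # chain: op_i -> op_{i+1}
    return tasks, edges

# A toy 2-job instance: job 0 has 2 operations, job 1 has 3.
tasks, edges = fjss_to_workflow([["j0_o0", "j0_o1"],
                                 ["j1_o0", "j1_o1", "j1_o2"]])
```

The resulting DAG (5 tasks, 3 precedence edges) can then be handed to any workflow scheduler unchanged.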
\\n\\n**W2: Several techniques are introduced in the pipeline, but not all are rigorously validated in the ablation study.**\\n\\nTo address your concern, we conducted comprehensive ablation studies to validate the effectiveness of the key techniques introduced in our pipeline. Ablation experiments cover **three aspects**: \\n\\n1. **Actor Network**: Table R2 validated the **task-specific embedding module** (TSEM) in our actor network introduced in Subsection 4.2.1. Our architecture with pairwise processing and focused task embedding (**Ours-TSEM**) outperformed the baselines (TSEM w/o. pair and TSEM w. mean), achieving the **lowest** cross-entropy loss. This highlights the importance of _separating task-machine pairs_ and avoiding mean pooling, adequately prioritizing critical task-specific information for decision-making.\\n| Actor Architecture | 100-th | 200-th | 300-th | 400-th | 500-th |\\n|--------------------|---------|---------|---------|---------|---------|\\n| Ours-TSEM | 2.7486 | **2.7106** | **2.6881** | **2.6647** | **2.6498** |\\n| TSEM w/o. pair | 3.1707 | 3.1597 | 3.1538 | 3.1468 | 3.1435 |\\n| TSEM w. mean | **2.7099** | 2.7209 | 2.7152 | 2.6659 | 2.7109 |\\n\\n **Table R2. Cross-entropy loss at different iterations.**\\n\\n2. **Critic Network**: Table R3 validated the **system-oriented embedding module** (SOEM) in our critic network introduced in Subsection 4.2.2. **Our-SOME** clearly **outperformed** variants like SOEM w/o. edge (remove bi-directional and additional task-machine edges) and SOEM w/o. self (remove self-attention layers) in value loss. These results highlight the importance of _comprehensive context awareness_ and _long-range interaction_ modeling developed by us.\\n| Critic Architecture | 100-th | 200-th | 300-th | 400-th | 500-th |\\n|---------------------|-----------|-----------|-----------|-----------|-----------|\\n| Ours-SOEM | **16.3971** | 14.0938 | **10.4907** | **9.5811** | **7.8581** |\\n| SOEM w/o. 
edge | 17.3012 | **13.4737** | 11.6626 | 9.8066 | 8.8853 |\\n| SOEM w/o. self | 20.6114 | 16.1826 | 14.6813 | 12.6997 | 12.0733 |\\n\\n **Table R3. Mean relative error between return and state value at different iterations.**\\n\\n3. **Online Learning**: Table R4 validated two techniques for online learning introduced in Subsection 4.3.2. **Ours-Online** achieved **superior** online performance improvement compared to Online w/o. grad. (remove gradient control) and Online w/o. freq. (remove high-frequency critic updates). These results demonstrate the effectiveness of both techniques in stabilizing and enhancing online learning.\\n| Training Method | 150-th | 175-th | 200-th | 225-th | 250-th |\\n|-------------------|---------|---------|---------|---------|---------|\\n| Ours-Online | **1.62%** | **1.50%** | **1.57%** | **1.52%** | **1.52%** |\\n| Online w/o. grad. | -1.18% | -1.08% | -1.24% | -1.36% | -1.64% |\\n| Online w/o. freq. | -184.80%| -261.27%| -283.93%| -336.86%| -382.54%|\\n\\n **Table R4. Improvement in mean flowtime compared to Ours-Offline.**\\n\\nWe believe our ablations studies (see Appendices F, G and L) provide rigorous validation suggested by the reviewer. \\n\\n---\\n[1] Song, W., et al. (2022). Flexible job-shop scheduling via graph neural network and deep reinforcement learning. _IEEE Transactions on Industrial Informatics_, 19(2), 1600-1610.\"}", "{\"metareview\": \"This paper presents an approach for dynamic workflow scheduling in cloud environments. The method combines task-specific graph representations with graph attention networks for actor-critic networks and integrates offline imitation learning with online reinforcement learning. The reviewers acknowledged the paper's strong empirical results, clear presentation, and practical value. 
While initial concerns were raised about technical novelty, experiments across varied workloads, and computational overhead, the authors provided comprehensive responses including additional experiments on multi-objective optimization, scalability analysis, and detailed comparisons with existing approaches.\", \"additional_comments_on_reviewer_discussion\": \"Initially, key concerns centered on the method's novelty compared to existing scheduling approaches, its practical applicability, and computational requirements. The authors provided responses including new experimental results. This led to constructive dialogue with some reviewers explicitly raising their scores based on the thorough responses.\"}", "{\"title\": \"Kindly request feedback from Reviewer Eeze\", \"comment\": \"Dear Reviewer Eeze,\\n\\nWe sincerely thank you for your time and effort in reviewing our work. As the discussion phase soon approaches to its end, we would greatly appreciate it if you could kindly provide feedback on our responses. We are eager to engage in further discussion with you and are open to new suggestions.\\n\\nWe would like to provide more explanations to address your concerns regarding **our work's differences from existing methods** and the **problem background**, with all the relevant changes _highlighted in red_ in the revised paper. Key explanations are summarized below for your convenience.\\n\\n1. To address the unique challenges posed by **large-scale dynamic** workflow scheduling (DWS) problems, our work introduces **three significant modifications** that set it apart from existing studies, including:\\n\\n - **Graph representations**: Many existing scheduling methods were designed to solve *small-scale* problems with *fixed graph structures* (i.e., the number of nodes in the graph-based state representation remains fixed). 
In contrast, our approach utilizes **dynamic graph representations** to effectively capture the **time-varying relationships** among completed, ongoing, and newly arrived workflows, while simultaneously tracking **real-time machine status**. This design ensures a comprehensive and up-to-date view of the scheduling environment.\n - **Network architectures**: In most prior studies, the actor and critic share the *same feature extraction layers* and rely on the *same state input*. Instead, we propose a new actor-critic architecture that allows the actor and the critic to **process different state representations**. In this way, the actor is tailored to distinguish important actions. The critic focuses on processing the state information at the global scale. The effectiveness of this new architecture and its advantages over other competing approaches have been **verified experimentally** on a range of large and dynamic scheduling scenarios.\n - **Training methods**: Unlike many previous works that *apply existing RL algorithms without problem-specific modifications*, we propose a **novel offline-online learning** method to achieve reliable online improvement of the actor during the daily operation of the scheduler, significantly enhancing the actor's adaptability and performance on large and dynamic scheduling problems.\n\n2. Following your advice, we further revised our paper to **strengthen the background discussion** regarding the DWS problem. 
We clearly **highlighted the key focus** of DWS, which is to assign a long sequence of interdependent tasks to heterogeneous virtual machines, driven by the aim to optimize the _mean flowtime_ across all workflows or other objectives such as the cost (see Appendix O of the revised paper for new experiment results).\nThere are some **key assumptions** associated with the DWS problems, including: \n - The pattern of each workflow (i.e., DAG structure) is unknown until it reaches the system.\n - Tasks within a workflow can be allocated to any machine, with processing times varying according to machine speeds.\n - Each machine can process only one task at a time.\n - Only tasks with all predecessors completed are eligible for scheduling.\n\n All these assumptions **align closely** with numerous real-world applications. We have provided more background information in Appendices B and I of the revised paper.\n\nBest regards, \nPaper 4258 Authors"}", "{"title": "Further Response to Reviewer Eeze - [1/2]", "comment": "Dear Reviewer Eeze,\n\nThank you for reviewing our updated manuscript and recognizing the comprehensiveness of our experiments and the clarity of our writing. We truly appreciate your feedback.\n\nWe would like to respond to your concerns about the applicability of our scheduling problems, the relevance to ML-focused conferences, the novelty of our technical contributions, and our work's relevance to the ICLR community.\n\n**1. The scheduling problem is widely applicable** \nScheduling is a fundamental combinatorial optimization problem (COP) with **wide-ranging applications**, including but not limited to cloud computing, manufacturing, maritime, aviation, and logistics. 
**Its importance is well-recognized in the ML community**, as evidenced by prior works published at the **top-tier ML conferences**:\\n| | **Influences** | **Graph Representations** | **Architectures** | **Training Methods** | **Problem Scales** | \\n|-|-|-|-|-|-|\\n| **[8] (NeurIPS 2020)** | Has been **cited 378 times** | Static disjunctive graphs | GIN; Shared encoder | Unmodified PPO | \\u22642,000 tasks |\\n| **[9] (ICLR 2024)** | **Highly rated** work (8,8,8,6 ratings) | Same as [8] | [8] + GAT; Only actor encoder | Unmodified REINFORCE | \\u22642,000 tasks |\\n| **[10] (NeurIPS 2024)** | **Quickly cited** by 9 papers | Same as [8] | [9] + attention layers; Only actor encoder | Self-supervised learning | \\u22642,000 tasks |\\n| **Our Work** | ---- | **New dynamic** graph | GAT+ attention layers; **Separate** encoders | **New** offline-online PPO | \\u2264**600,000** tasks |\\n\\n**Table R6. Summary of prior works in Learning-to-Optimize for scheduling problems.** \\n\\nDifferent from the three successful studies summarized above, **our work introduces innovations in all aspects** of graph representation, architectures, training methods, and problem scale. Additionally, our work clearly demonstrates the **wide applicability** of GOODRL, extending beyond DWS to FJSS (see our response W1 to Reviewer jCq6) as well as multi-objective problems (e.g., joint optimization of mean flowtime and VM rental cost, see our response W4 to Reviewer RdYS), showcasing **strong generalization capability**. This aligns closely with the **growing trend in the ML community** of applying deep learning techniques to increasingly large and dynamic scheduling problems, advancing **both methodology and real-world applications**.\\n\\n**2. 
ML research demands both new techniques and tailored applications** \nWe acknowledge that many _**Learning-to-Optimize**_ (**L2O**) studies, such as [8], [9], and [10], successfully **leveraged existing ML techniques** like GIN, GAT, and PPO **in innovative ways**. While these techniques are not entirely new to the broader ML field, _their adaptation to specific scheduling problems **provides valuable insights** and **lays the groundwork** for advancing ML applications_. Similarly, our approach makes a **significant contribution to the L2O field** by creatively leveraging and enhancing existing ML techniques (see point 3 below) to tackle **significantly larger** and **highly dynamic** scheduling problems with substantial practical importance. \nIn addition, the **ICLR website** (https://iclr.cc/) mentions that ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of DL used in the fields of AI, statistics, and data science, **as well as important application** areas. The **relevant topics** discussed at the conference **also include applications** in fields such as audio, speech, robotics, neuroscience, biology, and others."}", "{"summary": "The paper presents Graph assisted Offline-Online Deep Reinforcement Learning (GOODRL) for Dynamic Workflow Scheduling for Cloud Computing. 
The presence of heterogeneous configurations, the dynamic arrival of workflows, and a constantly evolving environment make this a challenging problem for state-of-the-art models.", "the_contributions_presented_in_the_paper_are": "1) A Task-Specific Graph Representation and Graph Attention Actor Model that dynamically assign focused tasks to heterogeneous machines.\n2) Explicit Consideration of the future impact of the crucial state.\n3) A combination of Offline Imitation Learning followed by Online PPO.", "soundness": "2", "presentation": "3", "contribution": "1", "strengths": ["Clear and detailed explanation of the approach being used.", "Easy to understand figures.", "Compared against Multiple Benchmarks and show that the model outperforms the baselines in most instances"], "weaknesses": "- The topic of Resource Optimization using Graph Neural Networks is an open problem that has applications not limited to Cloud Computing. The problem itself is also explored under Multiple Travelling Salesman Problems [1, 2], Vehicle Routing Problem [3], Job Shop Scheduling [4] and Task Allocation and Scheduling in Multi-Agent Systems [5, 6]. While the application of the problem to Cloud Computing is novel, the use of Reinforcement Learning and Graph Attention Networks for similar optimization problems exists.\n\n- It is unclear how the proposed method differs from Online Predictive Scheduling using Heterogeneous Graph Attention presented in Wang et al. (2022) [7].\n\n- The enhancement provided by the Online RL part of the model is unclear. The experimental results show that the Offline Learning allows for the model to be within 2% of the Online Training results. The significance of this improvement is unclear and needs to be discussed clearly.\n \n[1] Yujiao Hu, Yuan Yao, and Wee Sun Lee. 2020. A reinforcement learning approach for optimizing multiple traveling salesman problems over graphs. Knowledge-Based Systems 204 (Sept. 2020), 106244. 
https://doi.org/10.1016/j.knosys.2020.106244\n\n[2] Yujiao Hu, Zhen Zhang, Yuan Yao, Xingpeng Huyan, Xingshe Zhou, and Wee Sun Lee. 2021. A bidirectional graph neural network for traveling salesman problems on arbitrary symmetric graphs. Engineering Applications of Artificial Intelligence 97 (Jan. 2021), 104061. https://doi.org/10.1016/j.engappai.2020.104061\n\n[3] Steve Paul and Souma Chowdhury. 2022. A scalable graph learning approach to capacitated vehicle routing problem using capsule networks and attention mechanism, Vol. 86236. American Society of Mechanical Engineers, V03BT03A045\n\n[4] Song, Wen, et al. \"Flexible job-shop scheduling via graph neural network and deep reinforcement learning.\" _IEEE Transactions on Industrial Informatics_ 19.2 (2022): 1600-1610.\n\n[5] Z. Wang and M. Gombolay, \"Learning Scheduling Policies for Multi-Robot Coordination With Graph Attention Networks,\" in IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4509-4516, July 2020, doi: 10.1109/LRA.2020.3002198.\n\n[6] B. Altundas, Z. Wang, J. Bishop and M. Gombolay, \"Learning Coordination Policies over Heterogeneous Graphs for Human-Robot Teams via Recurrent Neural Schedule Propagation,\" _2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, Kyoto, Japan, 2022, pp. 11679-11686, doi: 10.1109/IROS47612.2022.9981748.\n\n[7] Wang, Z., & Gombolay, M. (2022). Stochastic Resource Optimization over Heterogeneous Graph Neural Networks for Failure-Predictive Maintenance Scheduling. _Proceedings of the International Conference on Automated Planning and Scheduling_, _32_(1), 527-536. 
https://doi.org/10.1609/icaps.v32i1.19839", "questions": ["How does this model differ from the model presented in Wang et al. (2022) [7]?", "How is the heterogeneity of the agent-task accounted for in the graph representation?", "It is unclear what the novelty of this work is compared to similar works published in Multi-Agent Coordination, and Task Allocation and Scheduling Domains."], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}", "{"title": "Paper Decision", "decision": "Accept (Poster)"}", "{"comment": "Well done on the rebuttal! My concerns have been well-addressed. This paper has been strengthened with all these additional experiments. I believe this paper is above \"borderline acceptance\" and deserves an \"acceptance\". However, ICLR does not offer the choice of a score of 7. For now, I increase my score to 8."}", "{"title": "Kindly ask for feedback from Reviewer Eeze", "comment": "Dear Reviewer Eeze,\n\nAs the discussion phase is nearing its conclusion, we kindly ask if you could provide feedback on our responses and revisions. 
If you find that we have satisfactorily addressed your concerns, we would greatly appreciate your consideration of adjusting your rating to reflect the improvements made.\n\nWe sincerely thank you for your time and thoughtful review.\n\nBest regards, \nPaper 4258 Authors"}", "{"title": "Response to Reviewer Eeze", "comment": "We thank the reviewer for their thoughtful comments and will address these concerns in the revised paper.\n\n**W1: The broader insights with respect to simple algorithm modifications.**\n\nOur work addresses **large-scale dynamic** scheduling problems with **up to 600,000 tasks**, **heterogeneous** machines, and **unpredictable** workflow arrivals, demanding major modifications to graph representations, network architectures, and training methods.\n\nBy tackling such challenging scheduling problems, we derive broader insights in the following aspects:\n- The graph-based input to _the actor and critic networks should be clearly **separated**_ to effectively address large-scale, dynamic scheduling problems. This separation ensures a balance between global information capture and computational efficiency, particularly for the actor.\n- The actor and critic networks _must be designed to meet their **specific needs** in PPO_. The actor differentiates actions for the target task, while the critic focuses on global information. Therefore, they should be trained separately rather than jointly, as in previous works.\n- Actor-critic algorithms like PPO can _become **unstable** in large-scale, dynamic scheduling problems_. Their learning stability can be noticeably improved through gradient control and high-frequency independent training of the critic. 
\\n- Our experiments in Table R1 indicate that GOODRL, while designed for DWS, also _performs competitively in other scheduling problems_ like FJSS [1], demonstrating its transferability.\\n| Size | MOR | SPT | FIFO | MWKR | DRL-G | DRL-S | Ours |\\n|-------------|--------|--------|--------|--------|--------|--------|--------|\\n| 10\\u00d75 | 116.69 | 129.06 | 119.62 | 115.29 | 111.67 | **105.61** | 112.57 |\\n| 20\\u00d75 | 217.17 | 229.89 | 216.13 | 216.98 | 211.22 | 207.50 | **202.38** |\\n| 30\\u00d710 | 320.18 | 347.40 | 328.50 | 319.89 | 313.04 | 312.20 | **304.63** |\\n| 40\\u00d710 | 425.19 | 443.30 | 427.22 | 425.70 | 416.18 | 415.15 | **395.70** |\\n\\n **Table R1. Results on FJSS instances with different sizes.**\\n\\n**W2: Consider adding more background on the DWS problem and studies on real traces.**\\n\\nWe will incorporate more background information on the DWS problem into the revised paper. \\n\\nFor real-world traces, existing studies have explored three main aspects, which are also considered by us:\\n- In line with many existing studies [2-5], we focused on popular **scientific workflows** like CyberShake, Montage, and SIPHT (https://download.pegasus.isi.edu/misc/SyntheticWorkflows.tar.gz) in our experiments;\\n- We adopted **real-world resource configurations** from major cloud providers like Amazon EC2;\\n- Following SOTA research [3-7], we **simulated workflow arrivals** using Poisson distributions under a wide range of arrival rates.\\n\\nWe will update the manuscript to reflect this grounding in the literature and its relevance to practical scenarios.\\n\\nTo the best of our knowledge, **no real-world workflow arrival patterns** are referenced or used in existing studies. If such traces exist, they have not been accessible to us. We welcome any suggestions from the reviewer regarding this.\\n\\n---\\n[1] Song, W., Chen, X., Li, Q., \\\\& Cao, Z. (2022). Flexible job-shop scheduling via graph neural network and deep reinforcement learning. 
_IEEE Transactions on Industrial Informatics_.\\n\\n[2] Deelman, E., Vahi, K., Juve, G., Rynge, M., Callaghan, S., Maechling, P.J., Mayani, R., Chen, W., Da Silva, R.F., Livny, M. \\\\& Wenger, K. (2015). Pegasus, a workflow management system for science automation. _Future Generation Computer Systems_.\\n\\n[3] Xie, Y., Wang, X. Y., Shen, Z. J., Sheng, Y. H., \\\\& Wu, G. X. (2023). A two-stage estimation of distribution algorithm with heuristics for energy-aware cloud workflow scheduling. _IEEE Transactions on Services Computing_.\\n\\n[4] Qin, S., Pi, D., Shao, Z., Xu, Y., \\\\& Chen, Y. (2023). Reliability-aware multi-objective memetic algorithm for workflow scheduling problem in multi-cloud system. _IEEE Transactions on Parallel and Distributed Systems_.\\n\\n[5] Sun, Z., Mei, Y., Zhang, F., Huang, H., Gu, C., \\\\& Zhang, M. (2024). Multi-Tree Genetic Programming Hyper-Heuristic for Dynamic Flexible Workflow Scheduling in Multi-Clouds. _IEEE Transactions on Services Computing_.\\n\\n[6] Wang, S., Li, X., \\\\& Ruiz, R. (2019). Performance analysis for heterogeneous cloud servers using queueing theory. _IEEE Transactions on Computers_.\\n\\n[7] Gu, C., Li, Z., Huang, H., \\\\& Jia, X. (2018). Energy efficient scheduling of servers with multi-sleep modes for cloud data center. _IEEE Transactions on Cloud Computing_.\"}", "{\"title\": \"Response to Reviewer RdYS\", \"comment\": \"We thank the reviewer for recognizing the significance of our work. We value your feedback and hope to address your concerns with the following responses.\\n\\n**W1: Practical tests in real data centers would bolster the validity of the results.**\\n\\nWe agree that testing in real data centers would offer valuable validation of our approach. However, such experiments are **highly costly** and require **substantial investment in human labor**, making them **impractical** for us at this stage. This limitation applies to **most related studies**. 
We appreciate your suggestion and will consider pursuing real-world testing in **future research if sufficient funding and resources become available**.\\n\\n_Simulations remain the standard and mainstream methodology in this field._ For example, the **CloudSim simulator** [1] introduced by a world-leading cloud computing research group has **been cited 6,449 times** to date. We believe simulations do not significantly limit our approach's generalizability, as they are **carefully designed to mimic real-world cloud dynamics** like unpredictable workflow arrivals, resource heterogeneity, and dynamic operating conditions. Benchmarking against SOTA methods demonstrates our approach's robustness, indicating it would perform well in real-world environments.\\n\\n**W2: Experiments on specific workflows and configurations may limit applicability to other DWS problems.**\\n\\nRegarding specific workflows, we considered all the popularly studied scientific workflows [2] in our experiments, ensuring a fair comparison with baseline methods. We believe that _focusing on these workflows does not limit the applicability of our findings_ but instead **ensures a robust and meaningful evaluation** of our approach within a common experimental framework. \\n\\nRegarding machine configurations, we used **real-world machine configurations** from Amazon EC2, the largest global cloud provider, reflecting practical settings in cloud environments. While experiments with other providers could enhance generalizability, we believe _using EC2 configurations does not limit the applicability_. We appreciate the reviewer\\u2019s suggestion and will consider incorporating such experiments in future research.\\n\\nNotably, our experiments address **much larger** problem instances (up to 600,000 decision points) than many existing studies. Moreover, the dynamic conditions we consider **align with real-world cloud** scenarios. 
This scale highlights the broad applicability of our approach, and our strong performance under these challenging conditions supports its relevance to dynamic workflow scheduling research.\\n\\n**W3: The computational overhead associated with the proposed method.**\\n\\nWe will update the manuscript to include a discussion of the computational costs of our method. In Table R1, we compared the decision-making time of three types of architectures on a single CPU core. While our GAT-based approach has a higher decision time than GPHH and ESRL, it remains within a reasonable **6-7 millisecond** range, **comparable to cloud data transmission latency, making it suitable for real-time applications**.\\n| Scenarios | GPHH | ESRL | Ours |\\n|------------------------------|--------|--------|--------|\\n| \\u27e85\\u00d75,5.4,1k\\u27e9 | 0.7 ms | 2.6 ms | 6.1 ms |\\n| \\u27e85\\u00d75,9,1k\\u27e9 | 1.0 ms | 2.7 ms | 7.6 ms |\\n| \\u27e86\\u00d74,5.4,1k\\u27e9 | 0.6 ms | 2.7 ms | 6 ms |\\n| \\u27e86\\u00d74,9,1k\\u27e9 | 0.7 ms | 2.5 ms | 6.8 ms |\\n\\n**Table R1. The computational overhead to make a decision.**\\n\\n---\\n[1] Calheiros, R. N., Ranjan, R., Beloglazov, A., De Rose, C. A., \\\\& Buyya, R. (2011). CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. _Software: Practice and Experience_.\\n\\n[2] Deelman, E., Vahi, K., Juve, G., Rynge, M., Callaghan, S., Maechling, P.J., Mayani, R., Chen, W., Da Silva, R.F., Livny, M. \\\\& Wenger, K. (2015). Pegasus, a workflow management system for science automation. _Future Generation Computer Systems_.\"}", "{\"title\": \"Response to Reviewer 8vKM cnt.\", \"comment\": \"**Q1: The difference from [7].**\\n\\nPlease refer to our response to W2.\\n\\n**Q2: Accounting for agent-task heterogeneity in the graph representation.**\\n\\nWe account for agent-task heterogeneity in the graph representation through the following:\\n\\n1. 
**Raw Features**: Each task node includes static features (e.g., task workload) and **dynamic features** (e.g., execution time, machine utilization) that are **updated** at each decision step to capture task and machine heterogeneity (Appendix C).\\n\\n2. **Edges**:\\n - Edges that model workflow-specific task execution order.\\n - Edges that model machine-specific task execution order.\\n\\n3. **Dynamic Graph Structure**: The graph evolves its structure in real-time as tasks are completed, new tasks arrive, and machine states change in a dynamic environment.\\n\\n**Q3: Novelty of this work compared to prior research.**\\n\\nThe novelty of our work lies in the following key aspects compared to existing methods [5-6]:\\n\\n1. **Dynamic and evolving environments**: Our graph representation and architecture effectively handle **dynamic** workflow arrivals, **heterogeneous** machines, and **evolving** task dependencies, making it highly suited for cloud computing environments.\\n\\n2. **Integrated offline and online learning**: We combine **offline and online RL** seamlessly, enabling _rapid adaptation to large-scale, dynamic problems without unnecessary retraining_.\\n\\n3. **Scalability and extensibility**: Our method handles **extremely large** problems (up to 600,000 tasks) and achieves SOTA performance on related scheduling problems like FJSS [4]. 
New experiments to be included in the revised paper also demonstrate its ability to **support multi-objective** DWS (Please refer to response W4 to Reviewer RdYS).\\n\\nThese innovations address **complex, large-scale, dynamic** scheduling problems rarely tackled in prior literature, advancing the _Learn-to-Optimize_ field.\"}", "{\"title\": \"Response to Reviewer jCq6 cnt.\", \"comment\": \"**Q1: Explain the instability of PPO, in view of its application to long-horizon routing problems.**\\n\\nThe instability of PPO in our study is attributed to the unique challenges in horizon length, problem dynamicity, and complexity. \\n\\n1. **Extremely Long Horizon**: While routing problems in [2] and other recent works [3-4] have long horizons of up to $7\\\\times10^3$ decision steps, our DWS problem involves scheduling up to **$6\\\\times10^5$** workflow tasks (approx. **100 times longer** in horizon length). Such long horizon has been shown to seriously affect PPO's instability in [5]. \\n \\n2. **Dynamic and Evolving Environments**: In DWS, **workflows arrive unpredictably with random patterns**. The system state evolves as tasks are completed and new workflows arrive. Such **environmental dynamicity** may seriously **interfere with PPO\\u2019s training process**, which assumes more predictable state transitions [6-7]. \\n\\n3. **Complex Task-Machine Dependencies**: The time of processing a task on different machines can **vary 12 times**. DWS also involves **more flexible machine selections** and **intricate task-machine dependencies**, requiring precise value approximations and robust policy updates. These complexities challenge PPO\\u2019s ability to maintain stability during training.\\n\\nThese factors collectively explain why PPO experiences stability issues in our study. We will include additional discussions in the revised paper.\\n\\n---\\n[2] Hou, Q., Yang, J., Su, Y., Wang, X., \\\\& Deng, Y. (2023). 
Generalize learned heuristics to solve large-scale vehicle routing problems in real-time. In _ICLR_.\\n\\n[3] Zhou, J., Cao, Z., Wu, Y., Song, W., Ma, Y., Zhang, J., \\\\& Xu, C. (2024). MVMoE: Multi-task vehicle routing solver with mixture-of-experts. In _ICML_.\\n\\n[4] Bi, J., Ma, Y., Zhou, J., Song, W., Cao, Z., Wu, Y., \\\\& Zhang, J. (2024). Learning to handle complex constraints for vehicle routing problems. In _NeurIPS_.\\n\\n[5] Queeney, J., Paschalidis, Y., \\\\& Cassandras, C. G. (2021). Generalized proximal policy optimization with sample reuse. In _NeurIPS_.\\n\\n[6] Pan, F., Cai, Q., Zeng, A.X., Pan, C.X., Da, Q., He, H., He, Q., \\\\& Tang, P. (2019). Policy optimization with model-based explorations. In _AAAI_.\\n\\n[7] Liu, S. (2024). An evaluation of DDPG, TD3, SAC, and PPO: Deep reinforcement learning algorithms for controlling continuous system. In _DAI_.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank the reviewers for their thorough evaluation and thoughtful feedback. We are delighted that the reviewers recognized the importance of our research topic in cloud computing (jCq6, 8vKM), the technical novelty and effectiveness of our approach (jCq6, RdYS), the strong performance compared to baselines (jCq6, Eeze, 8vKM, RdYS), the comprehensive nature of our experiments (jCq6, RdYS), and the clarity and quality of our presentation (jCq6, Eeze, 8vKM, RdYS).\\n\\nThe insightful comments have helped us refine our work. Detailed responses to each question are provided below, along with new experiments, additional discussions, and updated background information, which will be incorporated into the revised paper and appendix. We believe we have addressed all key concerns raised and kindly encourage reviewers to review our rebuttals and provide prompt feedback at their earliest convenience. 
We are happy to engage further with reviewers to clarify any remaining issues or explore additional suggestions.\"}", "{\"comment\": \"We deeply thank Reviewer jCq6 for the encouraging feedback, recognizing our efforts in the rebuttal, and raising the score to 8! Your comments truly motivate us, and we sincerely appreciate your recommendation.\"}", "{\"title\": \"Response to Reviewer RdYS cnt.\", \"comment\": \"**W4: Exploring other practical objectives to enhance the method's broader applicability.**\\n\\nWe confirm that our method **can support other practical objectives** beyond flowtime reduction, typically by modifying the reward function.\\n\\nTable R2 gives examples of considering cost as another objective. We can re-train the actor with a modified reward function to optimize for both **cost** (i.e., machine rental fees) and **flowtime**. Results show that incorporating costs leads to a _slight increase_ in mean flowtime (up to 8%) but achieves **significant cost savings** of up to 41% in some scenarios. Thus, with a modified reward function, GOODRL can **achieve a desirable trade-off** between flowtime and cost, highlighting its broader applicability.\\n| Scenarios | Objectives | Single-Obj. | Multi-Obj. | Diff. |\\n|-----------|-------------|-------------|------------|-------------|\\n| \\u27e85\\u00d75,5.4,30\\u27e9 | *flowtime* | 401.77 | 420.29 | +4.61% |\\n| | *cost* | 139.82 | 82.28 | -41.15% |\\n| \\u27e85\\u00d75,9,30\\u27e9 | *flowtime* | 408.49 | 413.02 | +1.11% |\\n| | *cost* | 116.32 | 97.51 | -16.17% |\\n| \\u27e86\\u00d74,5.4,30\\u27e9 | *flowtime* | 277.57 | 286.73 | +3.30% |\\n| | *cost* | 192.24 | 143.47 | -25.37% |\\n| \\u27e86\\u00d74,9,30\\u27e9 | *flowtime* | 285.93 | 306.90 | +7.33% |\\n| | *cost* | 135.58 | 91.18 | -32.75% |\\n\\n**Table R2. Performance comparison of policies trained with single and multi-objective.**\\n\\nWe recognize the importance of considering **energy efficiency** in cloud scheduling. 
However, since _energy consumption depends on the workload of physical machines (PMs)_ managed by cloud providers, it falls outside the scope of our workflow scheduler.\\n\\nIf future opportunities allow us to collaborate with cloud providers or access VM-to-PM allocation data, we will consider integrating energy efficiency into our study to further broaden the applicability of GOODRL. The above discussions and experiments will be updated in our paper.\\n\\n**W5: Discussion on how the model generalizes to different workloads.**\\n\\nWe agree that discussing the model's generalization to varied workloads would enhance the paper and highlight its robustness. \\n\\nTable 2 in our paper shows that the pre-trained **GOODRL policy** performs well across scenarios with **varied workloads** or **machine numbers**, consistently outperforming baselines and **demonstrating strong generalization**.\\n\\nOur model achieves robustness in dynamic cloud environments in two key aspects:\\n- **Dynamic Graph Representation**: The proposed dynamic graph structure **continuously updates** based on real-time system states (e.g., varying number of workflows or machine load conditions), enabling it to effectively capture varied workloads.\\n- **Offline-Online Training Method**: This method enables the offline-trained actor to **continuously enhance its performance** by learning from online experiences during daily operations in dynamic cloud environments.\\n\\nWe will enhance this discussion with experiments demonstrating the model's adaptability to varied workloads. Detailed results are included in **the response to Q1** and will be reflected in the updated paper.\\n\\n**Q1: Model adaptation to significant changes in workflow patterns or cloud configurations without extensive retraining.**\\n\\nWe conducted additional experiments to evaluate the actor network's generalization to changes in workflow patterns, arrival rates, and cloud configurations without retraining. 
\\n\\n| Scenarios | Wf. | Arr. | Mach. | EST | PEFT | HEFT | GP | ESRL | Ours |\\n|-----------|----|-----|------|---------|---------|---------|---------|---------|---------|\\n| 1 | \\u2713 | -- | -- | 1954.59 | 961.26 | 881.55 | 962.35 | 14103.8 | **862.6** |\\n| 2 | \\u2713 | \\u2713 | -- | 2114.21 | 1005.76 | 904.06 | 832.37 | 6403.65 | **791.9** |\\n| 3 | -- | \\u2713 | 3\\u00d715 | 1793.76 | 927.33 | 872.71 | 1015.96 | 3208.32 | **761.2** |\\n| 4 | -- | \\u2713 | 4\\u00d710 | 1512.44 | 684.15 | 643.34 | 517.05 | 2696.69 | **509.2** |\\n| 5 | -- | \\u2713 | 5\\u00d77 | 1317.28 | 561.51 | 513.70 | 396.07 | 2534.30 | **385.4** |\\n| 6 | -- | \\u2713 | 6\\u00d75 | 1190.84 | 450.93 | 404.47 | 286.00 | 2420.63 | **282.1** |\\n\\n**Table R3. Varied scenarios in workflow patterns, arrival rates, and machine numbers.**\\n\\nResults in Table R3 show it effectively handles these variations. \\n- *Scenarios 1 and 2*, with **only compute-intensive workflow patterns** (i.e., 20 times larger than normal), show that our model performs well under significant workflow changes. \\n- *Scenarios 3 to 6*, with **variations in machine configurations** (i.e., configurations \\u00d7 each quantity), demonstrate the model's adaptability to cloud configuration changes.\\n\\nOur model maintains robust performance under diverse conditions due to its **dynamic graph representation** and **offline-online training** method. We will include these experiments and discussions in the revised paper.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for your detailed response and the comprehensive experiments that are added. I have thoroughly reviewed your updated manuscript, as well as your responses to the other reviewers. I don't have any further questions on how well your approach works on this particular problem --- your experiments demonstrate this part very well. 
However, the three contributions you highlighted, 1) applying GAT to each task structure, 2) using separate encoders for actor and critic, and 3) offline imitation learning and online adaptation, they don't seem to be entirely novel for a ML conference. To me, they seem more like taking off-the-shelf methods, and tailoring them for a specific system problem. For this reason, while I appreciate the comprehensiveness of your experiments and the clarity of your writing, I think this paper might be more appropriate for a systems conference.\"}", "{\"comment\": \"We greatly appreciate the reviewer for carefully reviewing our response, recognizing the value of our work and raising the score!\"}", "{\"title\": \"Response to Reviewer 8vKM cnt.\", \"comment\": \"**W2: Difference between our method and Wang et al. 2022 [7].**\\n\\nWe will cite [7] and clarify our key differences from [7]:\\n\\n1. **Problem Scope**: [7] focuses on failure-predictive maintenance scheduling, while we address DWS. Our method is also **applicable to related scheduling problems** such as Flexible Job Shop Scheduling (FJSS) [4]. Table R1 demonstrates the transferability of our method, which achieves **competitive results**.\\n| FJSS Size | MOR | SPT | FIFO | MWKR | DRL-G | DRL-S | Ours |\\n|-------------|--------|--------|--------|--------|--------|--------|--------|\\n| 10\\u00d75 | 116.69 | 129.06 | 119.62 | 115.29 | 111.67 | 105.61 | 112.57 |\\n| 20\\u00d75 | 217.17 | 229.89 | 216.13 | 216.98 | 211.22 | 207.50 | 202.38 |\\n| 30\\u00d710 | 320.18 | 347.40 | 328.50 | 319.89 | 313.04 | 312.20 | 304.63 |\\n| 40\\u00d710 | 425.19 | 443.30 | 427.22 | 425.70 | 416.18 | 415.15 | 395.70 |\\n\\n **Table R1. Results on FJSS instance with different sizes.**\\n\\n2. **Problem Scale**: [7] addresses relatively *small problems* (up to 96 aircraft), while we handle much larger scales with up to 600,000 tasks, approx. **3,000 times difference in scale**.\\n\\n3. 
**Task Dependencies and Graph Representation**: [7] assumes *independent* tasks. DWS involves **intricate inter-task dependencies, workflow interactions**, and **task-machine heterogeneity**. Our task-specific and system-oriented graph representations can effectively capture these complexities.\\n\\n4. **RL Algorithm**: [7] *avoids Actor-Critic* methods. In contrast, we propose specialized graph representation for the critic and incorporate self-attention layers to significantly **enhance value function accuracy** and **stabilize online training**.\\n\\n5. **Online Learning**: Online learning in [7] refers to the adaptation of *predicted failures*. In contrast, our method **continuously improves** the offline-trained actor using real-world experiences, ensuring **adaptability to dynamic demands**.\\n\\nWe will include these distinctions in the revised paper to highlight the novelty of our approach.\\n\\n**W3: Clarify the significance of improvements provided by the Online RL component.**\\n\\nThe Online RL component is important in ensuring **long-term stability** and **practical utility** for **large-scale** DWS. It is valuable in two key aspects:\\n\\n1. **Long-term utility**: Online RL continuously learns from new experiences, **avoiding the costly retraining required by offline RL**. It is highly sample-efficient, responsive, and adaptable, making it essential for maintaining high performance in dynamic environments.\\n\\n2. **Practical significance**: While the 2% improvement may appear small, its absolute impact is substantial for large-scale problems. As shown in Table R2, even slight reductions in flowtime (e.g., 0.84%) can **lead to significant cost savings** (e.g., 36.11% in machine rental fees), demonstrating the **broader practical benefits** (paper will be updated accordingly).\\n\\n| Scenarios | Objectives | Plan-1 | Plan-2 | Diff. 
|\\n|-------------|------------|----------|----------|-----------------|\\n| \\u27e85\\u00d75,5.4,30\\u27e9 | Flowtime | 421.94 | 420.29 | 0.39% |\\n| | Cost | 102.44 | 82.28 | 19.68% |\\n| \\u27e85\\u00d75,9,30\\u27e9 | Flowtime | 416.53 | 413.02 | 0.84% |\\n| | Cost | 152.63 | 97.51 | 36.11% |\\n| \\u27e86\\u00d74,5.4,30\\u27e9 | Flowtime | 292.81 | 286.73 | 2.08% |\\n| | Cost | 188.43 | 143.47 | 23.86% |\\n| \\u27e86\\u00d74,9,30\\u27e9 | Flowtime | 309.70 | 306.90 | 0.90% |\\n| | Cost | 137.43 | 91.18 | 33.65% |\\n\\n**Table R2. Comparison of two scheduling plans in flowtime and cost.**\"}", "{\"title\": \"Response to Reviewer 8vKM\", \"comment\": \"We thank the reviewer for the detailed feedback. We hope our following response can address your concerns.\\n\\n**W1: Novelty of applying reinforcement learning and graph attention networks.**\\n\\nGNNs and RL have been widely applied to combinatorial optimization problems (COPs), including VRP [8-10], JSS [11-15], and MATA [5-6]. However, **each problem domain presents unique challenges that require tailored graph representations and network architectures**, often forming the **core contributions**. 
For example, VRP studies often use encoder-decoder models [8-10], while JSS and MATA rely on GNNs to handle complex constraints [5-6,11-15].\\n\\nOur work focuses on Dynamic Workflow Scheduling (DWS), which _has great potential to push the boundary of using DRL to solve real-world COPs, especially scheduling problems_, in several aspects:\\n\\n- Dynamically **evolving states** represented by graphs with changing structures;\\n- **Large problem size** (600,000 tasks), far exceeding the scales of previously studied scheduling problems;\\n- **Sophisticated constraints** that arise from intricate inter-task dependencies, flexible machine choices, and highly heterogeneous task processing time.\\n\\nExisting graph representations and learning frameworks used in JSS or MATA [5-6,11-15] are **insufficient to handle these complexities**. Additionally, a reputable survey [16] highlights **the lack of effective GNN-based methods for workflow scheduling**, with most existing approaches relying on **simpler vector** or **matrix representations.**\\n\\nBuilding on these challenges, our contributions include: (1) problem-tailored **graph representations** and **architectures** for **large-scale** DWS, (2) **independently designed** and trained actor and critic; and (3) a **novel offline-online learning** framework ensuring _stability_ and _adaptability_ in dynamic environments. _These innovations push the boundary of using DRL for real-world scheduling problems._ We will clarify these novelties in the revised paper.\\n\\n---\\n[8] Hou, Q., Yang, J., Su, Y., Wang, X., \\\\& Deng, Y. (2023). Generalize learned heuristics to solve large-scale vehicle routing problems in real-time. In _ICLR_.\\n\\n[9] Zhou, J., Cao, Z., Wu, Y., Song, W., Ma, Y., Zhang, J., \\\\& Xu, C. (2024). MVMoE: Multi-task vehicle routing solver with mixture-of-experts. In _ICML_.\\n\\n[10] Bi, J., Ma, Y., Zhou, J., Song, W., Cao, Z., Wu, Y., \\\\& Zhang, J. (2024). 
Learning to handle complex constraints for vehicle routing problems. In _NeurIPS_.\\n\\n[11] Zhang, C., Song, W., Cao, Z., Zhang, J., Tan, P. S., \\\\& Chi, X. (2020). Learning to dispatch for job shop scheduling via deep reinforcement learning. In _NeurIPS_.\\n\\n[12] Zhang, C., Cao, Z., Song, W., Wu, Y., \\\\& Zhang, J. (2024). Deep reinforcement learning guided improvement heuristic for job shop scheduling. In _ICLR_. \\n\\n[13] Park, J., Chun, J., Kim, S. H., Kim, Y., \\\\& Park, J. (2021). Learning to schedule job-shop problems: representation and policy learning using graph neural network and reinforcement learning. _International Journal of Production Research_.\\n\\n[14] Su, C., Zhang, C., Xia, D., Han, B., Wang, C., Chen, G., \\\\& Xie, L. (2023). Evolution strategies-based optimized graph reinforcement learning for solving dynamic job shop scheduling problem. _Applied Soft Computing_.\\n\\n[15] Huang, J. P., Gao, L., \\\\& Li, X. Y. (2024). An end-to-end deep reinforcement learning method based on graph neural network for distributed job-shop scheduling problem. _Expert Systems with Applications_.\\n\\n[16] Jayanetti, A., Halgamuge, S., \\\\& Buyya, R. (2024). Reinforcement learning based workflow scheduling in cloud and edge computing environments: a taxonomy, review and future directions. arXiv:2408.02938.\"}", "{\"title\": \"Response to Reviewer Eeze cnt.\", \"comment\": \"**Q1: Temporal variability in workflow arrival patterns and their quantile plot.**\\n\\nWorkflow scheduling research typically assumes **Poisson-distributed arrival patterns** [4-6], supported theoretically by [6]. While some studies [5,7] considered real-world data, the actual arrival times are still simulated by Poisson distributions. Our work aligns with this common practice in the literature, which will be further clarified in the revised paper. 
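For concreteness, here is a minimal stdlib-only sketch of how such Poisson-distributed workflow arrivals can be simulated and their per-hour quantiles estimated (the function names below are illustrative only, not taken from our actual implementation):

```python
import random

def simulate_arrivals(rate_per_hour, horizon_hours, seed=0):
    """Sample workflow arrival times (in hours) from a homogeneous
    Poisson process via exponential inter-arrival gaps."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_hour)
        if t >= horizon_hours:
            break
        arrivals.append(t)
    return arrivals

def hourly_count_quantile(arrivals, horizon_hours, q):
    """Empirical q-quantile of the number of workflow arrivals per hour."""
    counts = [0] * horizon_hours
    for t in arrivals:
        counts[int(t)] += 1
    counts.sort()
    return counts[int(q * (len(counts) - 1))]

# For lambda = 5.4 workflows/h, the 95% quantile is around 9 arrivals/h.
q95 = hourly_count_quantile(simulate_arrivals(5.4, 20000), 20000, 0.95)
```

Under this model the per-hour counts follow a Poisson(\u03bb) distribution, so the 95% quantile lands near 9 arrivals/h for \u03bb = 5.4 and near 14 arrivals/h for \u03bb = 9.0, consistent in magnitude with the quantiles reported in our experiments.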
Furthermore, our online learning method can **adapt to substantial temporal variability in arrival rates or patterns**, meeting the critical demands of real-world applications.\\n\\nThe 95\\\\%-99\\\\% quantiles of workflow arrivals at different arrival rates are presented in Table R2. The corresponding plots will be included in the revised paper. \\n| Arrival Rates (workflows/h) | 95% Quantile | 99% Quantile |\\n|-----------------------------|------------------------|------------------------|\\n| $\\\\lambda=5.4$ | 9.3 (approx. 280 tasks/h) | 11.7 (approx. 350 tasks/h) |\\n| $\\\\lambda = 9.0$ | 14.0 (approx. 420 tasks/h) | 16.0 (approx. 480 tasks/h) |\\n\\n**Table R2. 95\\\\%-99\\\\% quantiles of workflow arrivals at different arrival rates.**\\n\\n**Q2: Inference time efficiency compared to baseline approaches.**\\n\\nAs shown in Table R3, our model takes **only 6-7 ms** to make a decision. Although ESRL and GPHH are faster, our model's inference time is less than the communication latency and data transfer time in cloud, hence **short enough to meet real-world requirements**. New results and discussions will be updated in the revised paper.\\n| Scenarios | GPHH | ESRL | Ours |\\n|------------------------------|--------|--------|--------|\\n| $\\\\langle 5 \\\\times 5,5.4,1k \\\\rangle$ | 0.7 ms | 2.6 ms | 6.1 ms |\\n| $\\\\langle 5 \\\\times 5,9,1k \\\\rangle$ | 1.0 ms | 2.7 ms | 7.6 ms |\\n| $\\\\langle 6 \\\\times 4,5.4,1k \\\\rangle$ | 0.6 ms | 2.7 ms | 6 ms |\\n| $\\\\langle 6 \\\\times 4,9,1k \\\\rangle$ | 0.7 ms | 2.5 ms | 6.8 ms |\\n\\n**Table R3. The inference time to make a decision.**\\n\\n**Q3: Robustness of the approach to fluctuating workflow arrivals.**\\n\\nWe agree that workflow arrivals can fluctuate during extreme events. We conducted further experiments where we **varied the arrival rate** by \\u00b150% to +100% compared to the training data. 
In Table R4, GOODRL **consistently outperforms all** competing approaches, demonstrating its **robustness to such fluctuations** and ability to adapt to dynamic changes.\\n| Arrival Rates | EST | PEFT | HEFT | GP | ESRL | Ours |\\n|---------------|---------|--------|--------|--------|---------|--------|\\n| -50% | 1288.59 | 626.37 | 567.55 | 403.99 | 2328.24 | **398.01** |\\n| +50% | 1165.15 | 516.60 | 481.70 | 413.99 | 4076.43 | **409.66** |\\n| +100% | 1112.05 | 498.06 | 469.46 | 424.92 | 5255.60 | **423.02** |\\n\\n**Table R4. Performance comparison under changed arrival rates.** (to be included in the revised paper)\\n\\n**Q4: Adaptability of the online algorithm to performance drops of the offline-trained actor.**\\n\\nShifts in workflow arrival rate can impact the performance of the offline-trained actor. However, the mean flowtime **remains robust upon increasing the arrival rate** from 5.4 to 9 (see Table 2 in the paper). To further investigate, we conducted **additional experiments** to test the adaptability of our online algorithm.\\n\\nWe introduced extra noise $\\\\epsilon$ at different levels to degrade the offline actor\\u2019s performance and trained this noise-infused actor online. In Table R5, performance initially dropped by 3.7\\u20133.8\\\\%, but after 150 online training iterations, the **gap reduced** to 1\\u20132\\\\%. Hence, online learning can **quickly adapt to distribution shifts** and **recover performance** effectively.\\n\\n| Noise | 0-th | 25-th | 50-th | 75-th | 100-th | 125-th | 150-th |\\n|-------------|--------|--------|--------|--------|--------|--------|--------|\\n| \\u03b5 = 0.05 | 3.71% | 3.61% | 3.49% | 3.20% | 2.21% | 1.94% | 1.95% |\\n| \\u03b5 = 0.1 | 3.87% | 3.66% | 2.92% | 2.42% | 1.46% | 1.10% | 1.06% |\\n\\n**Table R5. 
Performance gap with the no-noise actor across online training iterations.** (to be included in the revised paper)\\n\\n**Q5: Handling resource contention and performance interference in task allocation**\\n\\nResource contention typically arises when multiple tasks run simultaneously on the same VM. In dynamic workflow scheduling, each VM **processes its tasks sequentially**. We also ensure that each VM has sufficient memory before assigning a task. Thus, **resource contention is unlikely to be a significant issue** in our problem. We will clarify this in the revised paper.\"}", "{\"comment\": \"We sincerely thank the reviewer for thoroughly reviewing our responses, acknowledging the value of our work, and raising the score!\"}" ] }
4OaO3GjP7k
Flat Reward in Policy Parameter Space Implies Robust Reinforcement Learning
[ "Hyun Kyu Lee", "Sung Whan Yoon" ]
Investigating flat minima on loss surfaces in parameter space is well-documented in the supervised learning context, highlighting its advantages for model generalization. However, limited attention has been paid to the reinforcement learning (RL) context, where the impact of flatter reward landscapes in policy parameter space remains largely unexplored. Beyond merely extrapolating from supervised learning, which suggests a link between flat reward landscapes and enhanced generalization, we aim to formally connect the flatness of the reward surface to the robustness of RL models. In policy models where a deep neural network determines actions, flatter reward landscapes in response to parameter perturbations lead to consistent rewards even when actions are perturbed. Moreover, robustness to action perturbations further enhances robustness against other variations, such as changes in state transition probabilities and reward functions. We extensively simulate various RL environments, confirming the consistent benefits of flatter reward landscapes in enhancing the robustness of RL under diverse conditions, including action selection, transition dynamics, and reward functions. The code for these experiments is available at https://github.com/HK-05/flatreward-RRL.
[ "Reinforcement learning", "Flat Minima", "Robust Reinforcement learning" ]
Accept (Oral)
https://openreview.net/pdf?id=4OaO3GjP7k
https://openreview.net/forum?id=4OaO3GjP7k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAOFH32AiC", "y2H46uUcJS", "uM7R3Sv9oY", "rfNNtv3GDd", "qDinmC68oh", "nln8mqmqKT", "mkZIW3gsem", "m9sUJXrSsm", "m2ujjE50jU", "kp5OerT7TS", "khfAZ6Pw5m", "jOciwdQ1gm", "j9eLjIqPIz", "gpPIqcossX", "fgtJOstnJR", "ccqia3mHS2", "anVjKYzVFn", "XTE6xQYuHB", "XQbxaUjdhm", "Wf3R7qwHnP", "W920x4CTOp", "NO9zvs2vCY", "LctRfRiFoj", "LboG0zwRmC", "Jt5M0SwtsN", "HJS9ymeHuT", "GUxgH49d9u", "F3fyQSAmQB", "8Q7hfMUuHc", "7VziRFV0D0", "7F3uphWN3X", "6ulbqaFBa9", "4929rDLIwJ", "3h1XcRARH0", "20fX8givWV", "0j45yrhnw7" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732747489945, 1732445665041, 1732447329175, 1732443418873, 1732563258151, 1732446185062, 1734744523206, 1732707364347, 1732446492607, 1732443845722, 1732459988186, 1732447097161, 1732597024056, 1730673741156, 1730476000249, 1732707078294, 1732556036620, 1732443727734, 1732707244537, 1732446136941, 1732446798235, 1732446634940, 1732567816150, 1732733072074, 1730484155737, 1730226445861, 1732447291432, 1732447446644, 1732545834781, 1732443890533, 1732572092443, 1737523497350, 1732558978962, 1732456233503, 1732446420724, 1732558236251 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_uQqo" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Area_Chair_EFuD" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jacH" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_uQqo" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_RSs8" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jacH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_RSs8" ], [ "ICLR.cc/2025/Conference/Submission2326/Authors" ], [ "ICLR.cc/2025/Conference/Submission2326/Reviewer_jafF" ] ], "structured_content_str": [ "{\"comment\": \"Thank you again for your thoughtful and constructive feedback. 
We are encouraged to hear that our responses and the subsequent revisions have positively influenced your perception of our work. Your insightful review has been instrumental in guiding the significant improvements made to our manuscript. We look forward to the opportunity to contribute further to the field and to see how our work will influence future research.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": [\"We thank the reviewer for their thoughtful feedback and valuable suggestions. As detailed below, we have addressed the weaknesses and questions raised.\", \"**Weakness 1: Correcting mistakes in the Abstract and Introduction**\", \"We apologize for any difficulties caused by grammatical errors in the abstract and introduction. Also, we greatly appreciate your careful suggestions. In the revised manuscript, we have thoroughly revised the entire paper and corrected grammatical errors, enhancing overall clarity.\", \"**Weakness 2: Clarification of Definition 1**\", \"We have revised the notation to be clearer. We have revised Definition 1 to explicitly state that $\\\\epsilon$ is the perturbation added to the policy parameter $\\\\theta$.\", \"Specifically, we have modified the expectation notation from\", \"$\\\\mathbb{E}$$s,a\\\\sim\\\\pi_{\\\\theta^*+\\\\epsilon}$ to $\\\\mathbb{E}$${s \\\\sim p, a \\\\sim \\\\pi_{\\\\theta^* + \\\\epsilon}(a|s)}$\", \"This revision indicates that the state $s$ is sampled from the transition probability distribution $s\\\\sim p$, and the action $a$ is sampled from the perturbed policy $a\\\\sim\\\\pi_{\\\\theta^*+\\\\epsilon}(a|s)$.\", \"**Weakness 3: Unbounded Jacobian and Practical Guarantees**\", \"In practice, it is hard to guarantee a bounded Jacobian for deep neural networks. 
However, popular techniques such as weight regularization, gradient clipping, and bounded activation functions help control the magnitude of the network's weights and gradients, effectively constraining the Jacobian.\", \"When considering SAM\\u2019s objective, SAM promotes convergence to flatter regions in the loss landscape by considering parameter perturbations during optimization. This process inherently discourages sharp changes in the loss with respect to parameter changes, which is related to the second derivative (Hessian) and indirectly to the Jacobian. Also, when SAM finds loss minima, it suppresses the norm of the gradients, making the Jacobian smoother and more stable around the minima. This is analogous to SAM's main objective of finding smoother loss surfaces around minima.\", \"Our empirical results demonstrate that SAM+PPO leads to policies that are more robust to perturbations, supporting the practical relevance of our theoretical findings.\", \"We have added a discussion of the bounds of the Jacobian right after the proof of our Proposition in the revised paper.\", \"**Weakness 4: Baseline comparisons and the tests in the RNAC environments**\", \"We appreciate your careful comments on the baseline comparisons, along with suggestions for the open-source code of RNAC.\", \"We emphasize that we have run the RNAC experiments with the shared code, and the hyperparameters of RNAC are the same as those used in the original paper. Therefore, we are quite sure that the RNAC experiments in our paper do not have any technical mistakes or intentional picking of worse cases.\", \"To figure out the reason behind the underperforming results of RNAC, we want to point out that the perturbation settings in the RNAC paper are not as challenging as those in our paper. 
Specifically, the RNAC paper\\u2019s robustness evaluation is conducted with perturbations that change the stiffness of the actuator joint (e.g., leg joint stiffness), which is generally easier for agents to handle. It directly affects the actuator joint where the action is applied, making it easy to predict and adjust to the changes. In contrast, we evaluated robustness to changes in mass and ground friction, which introduce indirect and widespread effects on the dynamics. The agent is required to infer and adapt to more complex interactions in the environment. Also, we widely tested robustness against action and reward perturbations, which are not covered in the RNAC paper. We conjecture that RNAC is less effective in handling these varieties of perturbations than SAM+PPO.\", \"Also, even in the original RNAC paper, RNAC shows only modest gains over PPO rather than a substantial improvement. This suggests that the performance differences between RNAC and PPO are consistent across the RNAC paper and ours.\", \"In conclusion, we want to emphasize that our empirical comparisons are not cherry-picked, and our simulations of RNAC are reliable. Furthermore, this confirms that RNAC is less robust to various factors of environmental change.\"]}", "{\"comment\": [\"**Question 0: Discussions of the range of flatness in reward surface**\", \"When going back to the pioneering work of SAM, it is infeasible to anticipate how wide a region of flatness around the solution can be achieved after training a deep architecture via SAM. Herein, we tried to do our best to provide a general intuition for thinking about the range of flatness.\", \"For the SAM objective, $\\\\rho$ works as a critical hyperparameter to control the range of flatness. It is the radius of allowed perturbations in parameter space. 
Because SAM finds the worst cases within the radius $\\\\rho$ around the parameter, it forces the RL agent to search for a policy that shows minimal reward decrease within a sphere of radius $\\\\rho$ around the policy parameter. Thus, ideally, a sufficiently trained SAM+PPO would show a $\\\\rho$-radius flat reward region.\", \"In practice, we used $\\\\rho=0.01$, so it can be interpreted that SAM+PPO aims to find a 0.01-radius flat reward surface. However, this does not mean that a larger $\\\\rho$ is beneficial. A too-large radius probably makes the policy struggle to find too-wide flat maxima, which rarely exist in the parameter space. Therefore, it is crucial to find the optimal $\\\\rho$. We have added the ablations in Appendix C.3.\", \"We want to point out that this intuition is not solely new to our work but is widely accepted in the prior SAM-related literature. However, the prior literature focuses on supervised learning, not reinforcement learning.\", \"**Question 1: Training steps, hyperparameters**\", \"All agents were trained for 3,000,000 environment steps. We ensured that each agent had the same amount of interaction with the environment by maintaining an equal number of environment steps. This approach provides a fair comparison of performance and learning efficiency among the different algorithms.\", \"For PPO, SAM+PPO, and RNAC experiments, we adopted the hyperparameters provided in the RNAC paper for each environment. This includes settings such as learning rates, batch sizes, discount factors, GAE lambda, and other algorithm-specific parameters. For RARL, we used the hyperparameters specified in the RARL paper. The only hyperparameters we tuned were those introduced when applying SAM to PPO, specifically the SAM perturbation radius ($\\u03c1$). We experimented with different values of $\\u03c1$ (e.g., 0.01, 0.05, 0.1) to find the optimal setting that enhances robustness without adversely affecting training stability. 
The complete list of hyperparameters for each algorithm and environment is additionally provided in Appendix B.2.\", \"By using the hyperparameters from the RNAC and RARL papers, we aimed to ensure that our experiments are directly comparable to prior work and that any performance differences are due to the algorithms themselves rather than differing hyperparameter choices.\", \"**Question 2: Agent's action scale, dealing with noise added outside of the action range**\", \"In the environments we used, the action spaces are continuous and bounded. For example, in MuJoCo environments like Hopper-v3 and Walker2d-v3, the action values are within the range [-1, 1] for each action dimension.\", \"When evaluating robustness to action perturbations, we add Gaussian noise to the agent's actions during testing. If the addition of noise results in actions outside the valid range, we clip the actions to the allowable bounds of the environment. This ensures that all actions remain valid and prevents errors during simulation. We have clarified this procedure in Section 5.2, specifying how action noise is handled and how actions are kept within the valid range.\"]}", "{\"title\": \"Overall Comments : Summary of revisions to the original paper\", \"comment\": [\"We would like to express our sincere gratitude for your thoughtful and constructive feedback on our manuscript (highlighted in blue-colored font).\", \"We have carefully considered all your comments and have made comprehensive revisions to address your concerns.\", \"Here, we provide a detailed list of the changes made to the manuscript, organized by sections:\", \"**Abstract & Section 1. Introduction**\", \"**Grammar correction** : We have thoroughly revised the entire paper, corrected grammatical errors, enhanced overall clarity.\", \"**Preliminary experiment**\", \"Refined the transition into the preliminary experiment to ensure it flows naturally from the background and motivation. 
We clarified the purpose of the preliminary experiment, emphasizing how it illustrates the necessity for our proposed method.\", \"Redesigned Figure 1 by adding a zoomed-in mini-figure to make key features more visible, illustrating the essence of the preliminary experiments.\", \"**Section 3. Preliminaries**\", \"**Equation 1. clarification** : $\\\\mathbb{E}$${s, a \\\\sim \\\\pi}$ to $\\\\mathbb{E}$${s \\\\sim p, a \\\\sim \\\\pi}$\", \"**Details of Sharpness Aware Minimization** : Provided in Appendix C.\", \"**Section 4. Linking Flat Reward to Action Robustness**\", \"**Definition 1. clarification** : $\\\\mathbb{E}$${s, a \\\\sim \\\\pi_{\\\\theta^* + \\\\epsilon}}$ to $\\\\mathbb{E}$${s \\\\sim p, a \\\\sim \\\\pi_{\\\\theta^* + \\\\epsilon}(a|s)}$\", \"**Added Figure 2** : Visualization of Definitions 1, 2\", \"**Proposition 1 clarification** : Revised the statement to clarify the link between Definitions 1 and 2\", \"**Remark 1.2 clarification** : \\u2018For changes of reward function\\u2019 \\u2192 \\u2018For reward function perturbations\\u2019, etc.\", \"**Section 5. Experimental Results**\", \"**Extended evaluation :** Added results of RNAC and RARL across all evaluations\", \"**Appendix A. Proof of Proposition 1**\", \"**Policy stated as probability distribution** : Revised the proof to ensure that the policy is consistently defined as a probability distribution throughout the proof\", \"**Added Appendix A.2** : Discussion on the bounds of the Jacobian\", \"**Appendix C. Details and Analysis of SAM integrated with PPO**\", \"**Detailed discussion** : Provided a detailed discussion on the integration of SAM with RL\", \"**Appendix D. 
Additional Experimental Results**\", \"**SAM-enhanced other RL algorithms (SAM+TRPO)** : Added to validate the applicability and reliability of our SAM-enhanced method in a broader context\", \"**Discrete action environments provided by OpenAI Gym (CartPole-v1, LunarLander-v2)** : Added to assess the performance of SAM+PPO in settings with discrete action spaces and different reward structures.\", \"**Appendix E. Comparison with existing Robust RL algorithms**\", \"**Reorganized content** : Moved the extra experiment from Section 5.6 in the original paper to Appendix E\", \"**Ablation studies** : Provided ablations to understand how SAM-enhanced RL brings benefits in comparison with other algorithms\"]}", "{\"title\": \"Thanks for the detailed response\", \"comment\": \"thanks the authors for the time and effort preparing the response.\\nI am happy to raise the score.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"- To answer Question 1, we have considered TRPO as a new baseline and demonstrated how well SAM works in conjunction with TRPO.\\n- Specifically, we extended our study by integrating SAM with TRPO, resulting in SAM+TRPO. TRPO uses trust region optimization to ensure stable policy updates by directly constraining the KL divergence between the old and new policies. As detailed in Appendix D, SAM+TRPO outperforms standard TRPO in several environments, indicating that the benefits of SAM are not limited to PPO. 
The results show that SAM enhances the robustness of TRPO, suggesting that promoting flatness in the loss landscape is beneficial across different optimization frameworks.\\n\\n| Perturbation | Metric | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \\u03c3 = 0.2 | TRPO | 4805 | 1502 (\\u22123303) | 3118 | 1452 (\\u22121666) | 4975 | 603 (\\u22124372) |\\n| | SAM+TRPO | **5502** | **3975 (\\u22121527)** | **3547** | **2313 (\\u22121234)** | **5097** | **2052 (\\u22123045)** |\\n| Mass Scale Factor 1.2 | TRPO | 4837 | 3865 (\\u2212972) | 3215 | 1556 (\\u22121659) | 4957 | 782 (\\u22124175) |\\n| | SAM+TRPO | **5562** | **5210 (\\u2212352)** | **3499** | **3508 (+9)** | **5205** | **5284 (+79)** |\\n| Friction Coefficient 0.88 | TRPO | 4723 | **4774 (+51)** | 3075 | 1580 (\\u22121495) | 4996 | 4756 (\\u2212240) |\\n| | SAM+TRPO | **5562** | 5539 (\\u221223) | **3498** | **2728 (\\u2212770)** | **5073** | **5134 (+61)** |\\n- While we focused on integrating TRPO and additional OpenAI Gym environments due to the limited rebuttal period, we highly acknowledge the value of safe-control-gym and Safety Gymnasium, which offers a rich set of environments and baseline implementations for safe and robust RL. We plan to incorporate environments and baselines from these resources in future research to further evaluate our method in safety-critical settings and against a broader range of robust RL formulations.\\n- **Synergy with PPO's Clipping Mechanism:** PPO's clipping mechanism serves as an approximation to trust region optimization, preventing large updates that could destabilize training. This clipping synergizes well with SAM's objective of seeking flat minima by limiting the parameter updates to a stable region. 
PPO approximates trust region optimization through clipping, ensuring that the updated policy does not deviate excessively from the old policy. SAM complements this by seeking parameter regions where such deviations are less sensitive to perturbations.\\n\\nWe truly appreciate your time for reviewing our work. We hope our response clarifies the raised concerns.\"}", "{\"metareview\": \"The paper investigates the relationship between flat minima and robustness in RL, finding that flatter minima correspond to more robust policies. These theoretical claims are supported with empirical results, including a new variant of PPO that uses a sharpness aware minimizer.\\n\\nThe reviewers appreciated the novelty of the analysis in the paper, the clear theoretical results, and the strong empirical results of the proposed method. The reviewers also appreciated the use of visualizations and metrics to bolster claims relating flatness with robustness. The reviewers note that sharpness aware minimizers have been used in conjunction with PPO before, but appreciate that the authors clearly describe the relationship and contributions relative to this prior work. Finally, the reviewers appreciate the improved computational complexity and sample complexity of the proposed method.\\n\\nThe reviewers had some suggestions for clarifying the paper writing (grammar, statements of some of the theoretical results) and suggestions for additional baselines and visualizations.\\n\\nOverall, this is a very well written paper about a novel perspective on robustness, and should be appreciated by many members of the ICLR community.\", \"additional_comments_on_reviewer_discussion\": \"During a robust discussion period, the authors addressed concerns about computational complexity, hyperparameter sensitivity, ablations, and applicability to new tasks. The reviewers also revised the paper (e.g., definitions). 
One reviewer explicitly lauded the authors for revising the paper (during the rebuttal) in terms of experiments, theory, and presentation.\"}", "{\"comment\": [\"**Comment 12: Future plan for improving theoretical claims** (lower bound on $\\\\Delta^*$)\", \"As you pointed out, we acknowledge that the formulation of the lower bound of the robustness would be the ultimate goal. If possible, by finding the lower and upper bounds, we would fully understand how much robustness in action can be achieved by using the flatness on parameter space.\", \"**First, we here emphasize the meaning of upper bound that achieved in our work.**\", \"The key point in the Proposition is that $\\\\Delta^*$ represents the maximum allowable action perturbation under which the policy $\\u03c0_{\\u03b8^{\\u2217}}$ maintains its optimal expected cumulative reward, the largest perturbation magnitude for which the policy remains robust.\", \"Addressing concern about $\\\\Delta^*$ \\u2192 0: Let\\u2019s consider the components $\\\\Delta^*$ :\", \"$\\\\mathcal{E}$ is positive by Definition 1. If $\\\\mathcal{E}$ = 0, implying that reward is not flat, which contradicts to the flat reward maxima. We assume the given flat reward on the parameter space, thus $\\\\mathcal{E}>0$.\", \"$||J(\\u03b8^\\u2217)\\u2225$ is non-negative unless the policy is completely insensitive to parameter change. If $||J(\\u03b8^\\u2217)\\u2225=0$, the policy is constant with respect to $\\\\theta$. We want to point out that the Jacobian is the derivative of policy with respect to $\\\\theta$, not a derivative of reward (or loss). Therefore, even on reward maxima, the Jacobian does not have to be zero-forced.\", \"For the higher-order term of $\\\\mathcal{O}(\\\\mathcal{E}^2)$, the term is related to the 2nd and higher order of derivatives of policy with respect to $\\\\theta$. Thus, a zero value for the term indicates that the policy is fixed to be constant w.r.t. 
$\\theta$; the deep policy model would output the same value even under changes in parameter space. Such cases do not happen in practical training.\", \"Therefore, we believe that the case of $\\Delta^*$ \u2192 0 would not happen.\", \"**Let us share our insight on achieving the lower and upper bounds, possibly in future work.**\", \"The problem is to determine how variations in parameter space affect variations in action space; that is, bridging the parameter variations to the output variation of the policy network. With some further derivation, it becomes a problem of converting a sphere around $\\theta^*$ (the flat reward region in parameter space) into the corresponding closed region of action space around $a$ (a clean region) that keeps the reward flat.\", \"Unfortunately, it is intractable to exactly formulate the region in action space for a given sphere in parameter space. As a starting point for future analysis, when assuming a linear model, we expect to find the exact solution of the flat region in action space (no need for bounds, but an exact solution). Briefly, it would say that *\u201cfor a flat reward region in parameter space, there exists an action region where all possible actions in the region give a flat reward\u201d*.\", \"As a further direction for deep models, we guess that the exact solution of the action region is intractable, but we imagine that a subset and a superset can be found. Through the subset and superset, we expect to provide the lower and upper bounds of the flatness in action space. We think that our Proposition 1 would be a preliminary form of the upper bound.\", \"In our opinion, our theoretical claims are not perfect, but they show valuable insights for linking flat reward in parameter space to action robustness. 
We are grateful that the claims were indeed improved by your suggestion to consider the cumulative reward form.\", \"We appreciate your insightful feedback and we hope that this clarification alleviates your concerns.\", \"Thank you once again for taking the time to review our work. We hope our response clarifies the raised concerns.\"]}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"**Weakness 2-1: Further ablations (additional comparison with RARL)**\\n\\n- We have included comparisons with the additional baseline RARL (Robust Adversarial Reinforcement Learning) in our main experiments (in the submitted version, we added RARL only in the complexity comparison, but we have added all robustness experimental results in the revised paper).\\n- In the revised paper, we added RARL results in Section 5, Experimental Results. We confirm that SAM+PPO consistently outperforms RARL in almost all robustness experiments, i.e., a wider range of robustness under friction-mass joint perturbations (Figure 6) and better robustness against reward perturbations (Table 2). 
RARL shows the best performance in the action robustness of the Hopper-v3 case (in Table 1), but it does not change the superiority of SAM+PPO in many cases.\\n- We take the \\u2018action robustness\\u2019 part of Table 1 of the main paper, as follows:\\n\\n| Perturbation | Metric | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \\u03c3 = 0.2 | PPO | 4758 | 1469(\\u22123289) | 3217 | 1467(\\u22121750) | 4883 | 607(\\u22124276) |\\n| | RNAC | 5484 | 2014(\\u22123470) | 3445 | 1321(\\u22122124) | 4147 | 652(\\u22123495) |\\n| | RARL | 4996 | 3412(\\u22121584) | 2819 | 1645(\\u22121174) | 4020 | 764(\\u22123256) |\\n| | SAM+PPO | **6523** | **4949(\\u22121574)** | **3766** | **2312(\\u22121454)** | **5129** | **2033(\\u22123096)** |\\n\\n**Weakness 2-2: Further ablations (TRPO with SAM)**\\n\\n- We have considered TRPO as a new baseline and demonstrated how well SAM works in conjunction with TRPO.\\n- Specifically, we extended our study by integrating SAM with TRPO, resulting in SAM+TRPO. TRPO uses trust region optimization to ensure stable policy updates by directly constraining the KL divergence between the old and new policies. As detailed in Appendix D, SAM+TRPO outperforms standard TRPO in several environments, indicating that the benefits of SAM are not limited to PPO. 
The results show that SAM enhances the robustness of TRPO, suggesting that promoting flatness in the loss landscape is beneficial across different optimization frameworks.\\n| Perturbation | Metric | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \\u03c3 = 0.2 | TRPO | 4805 | 1502 (\\u22123303) | 3118 | 1452 (\\u22121666) | 4975 | 603 (\\u22124372) |\\n| | SAM+TRPO | 5502 | 3975 (\\u22121527) | 3547 | 2313 (\\u22121234) | 5097 | 2052 (\\u22123045) |\\n| Mass Scale Factor 1.2 | TRPO | 4837 | 3865 (\\u2212972) | 3215 | 1556 (\\u22121659) | 4957 | 782 (\\u22124175) |\\n| | SAM+TRPO | 5562 | 5210 (\\u2212352) | 3499 | 3508 (+9) | 5205 | 5284 (+79) |\\n| Friction Coefficient 0.88 | TRPO | 4723 | 4774 (+51) | 3075 | 1580 (\\u22121495) | 4996 | 4756 (\\u2212240) |\\n| | SAM+TRPO | 5562 | 5539 (\\u221223) | 3498 | 2728 (\\u2212770) | 5073 | 5134 (+61) |\\n\\n**Weakness 2-3: Further ablations (SAM+PPO in other environments)**\\n\\n- Furthermore, we conducted additional experiments on environments provided by Open-AI Gym, including environments with discrete action space, such as CartPole and LunarLander.\\n- The new experimental results are included in Appendix D. 
We discuss how SAM+PPO performs in these settings and compare it with the provided baselines to show how SAM+PPO performs in settings with different action dynamics (we use action noise with $\\\\sigma = 0.2$).\\n\\n| Algorithm | CartPole-v1 | | LunaLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Perturbed | Nominal | Perturbed |\\n| PPO | 500 | 464(-36) | 200 | 175(-25) |\\n| SAM+PPO | 500 | **481(-19)** | **200** | **188(-12)** |\\n- Also, for the noisy reward cases, SAM+PPO shows the gains (we use reward noise with $\\\\sigma = 0.1$; the noise is added in the training as done in the main experiment).\\n\\n| Algorithm | CartPole-v1 | | LunaLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Perturbed | Nominal | Perturbed |\\n| PPO | 500 | 432(-68) | 200 | 165(-35) |\\n| SAM+PPO | **500** | **458(-42)** | **200** | **182(-18)** |\\n- From the theoretical viewpoint, we point out that SAM is a model-agnostic optimization function that can be applied to any gradient-based algorithm, including other policy gradient (PG) and actor-critic (AC) methods like TRPO, A2C, and SAC.\\n- In our revised manuscript, we have added these results in Appendix D.1.\"}", "{\"comment\": [\"**Weakness 2: Justification for adding reward noise during training**\", \"Let us anticipate a case where the training is done without reward noise, but the evaluation is done with reward noise. Adding reward noise during evaluation would NOT affect the agent's behavior because the policy decisions are made before observing the rewards. Thus, the noisy version of rewards would be observed in testing, which does not provide meaningful insights into the policy\\u2019s robustness. That is why we adopt noisy rewards during the training phase.\", \"Also, in practical scenarios, it is important to guarantee the robustness of agents against noisy reward observations. 
Agents often operate in real-world environments where the reward signals during training are noisy due to measurement errors or uncertainties. Training agents under such conditions and evaluating them in the nominal environment allows us to assess their ability to learn effective policies despite these challenges.\", \"**Weakness 3: About the preliminary experiments in the Introduction**\", \"We appreciate your careful suggestion about the preliminary experiments in the Introduction. First, we intend to provide a conceptual understanding of \\u2018action robustness\\u2019 by showing the experiments. When navigating with a risk of erroneous actions, an agent has to keep a safety margin from the obstacles. Therefore, we believe that the 2D maze environment clearly shows how the agent behaves to guarantee the robustness of the action by avoiding the risky narrow path.\", \"We carefully considered its placement and aimed to ensure that it enhances the reader's understanding of the motivation behind our research.\", \"To address your concern about the experiment feeling out of place, we have revised the introduction to better integrate the preliminary experiment:\", \"We have refined the transition into the preliminary experiment to ensure it flows naturally from the background and motivation. We clarified the purpose of the preliminary experiment, emphasizing how it illustrates the necessity for our proposed method.\", \"Also, we have redesigned Figure 1 by adding a zoomed-in mini-figure to make key features more visible, illustrating the essence of the preliminary experiments.\", \"**Question 1: Discussions on Walker2d-v3 with high friction**\", \"It is challenging to fully understand the empirical results, particularly in the high friction regime. 
However, we have tried our best to explain the results as follows:\", \"First, compared with HalfCheetah-v3 and Hopper-v3, the Walker2d-v3 environment is particularly sensitive to changes in friction due to the complex dynamics of balancing and coordinating two legs.\", \"Second, increasing the friction from low to high can be understood as a dramatic change in the environment. The Walker2d-v3 agent has its center of mass higher on its body than HalfCheetah-v3 and Hopper-v3, so high friction makes the agent fall over easily.\", \"Third, SAM does not perturb actions directly but indirectly, via perturbing the policy parameters. Consequently, SAM can experience the wide variety of actions induced by perturbing the policy parameters. In contrast, other action-robust RL methods explicitly perturb actions to experience the worst case.\", \"Based on the aspects above, we conjecture that SAM+PPO focuses less on experiencing high-friction cases, which are extreme perturbations of the environment.\", \"However, we want to point out that SAM+PPO can broadly experience shifts in actions, transition probabilities, and rewards by \\u201cindirectly\\u201d perturbing the policy parameters. As empirical evidence, SAM+PPO outperforms PPO across various factors of perturbation, whereas RNAC and RARL show comparatively limited gains over PPO (referring to Figure 6 and Table 1 in the main paper).\"]}", "{\"comment\": \"Thank you for your positive feedback and recommendation. I appreciate your time and effort in reviewing my work.\"}", "{\"comment\": [\"**Discussion**\", \"**Weakness 4~6: Main Idea \\u2018SAM\\u2019 is lacking**\", \"Thank you for pointing out the need for a deeper discussion of SAM (Sharpness-Aware Minimization). We acknowledge that the original explanation may not have sufficiently conveyed the rationale behind the method. 
We have significantly expanded this section and included detailed explanations in **Appendix C**.\", \"We elaborate on the optimization process of the min-max objective in Equation 8, including step-by-step explanations and the pseudocode (Algorithm 1 in Appendix C). This provides clarity on how SAM is integrated with PPO in our approach. Specifically, integrating SAM into PPO requires modifying the standard optimization steps to account for the perturbation $\\\\epsilon$. This involves:\", \"Computing the base gradient $g_{\\\\theta}=\\\\nabla_{\\\\theta}\\\\mathcal{L}(\\\\theta)$ at the parameter $\\\\theta$\", \"Calculating the worst-case perturbation $\\\\epsilon^* = \\\\rho g_{\\\\theta}/||g_{\\\\theta}||$\", \"Performing an additional forward pass to obtain the loss computed at the perturbed parameter: $\\\\mathcal{L}(\\\\theta+\\\\epsilon^*)$\", \"Computing the gradient $g_{\\\\theta^*}=\\\\nabla_{\\\\theta^*}\\\\mathcal{L}(\\\\theta^*)$, where $\\\\theta^*=\\\\theta+\\\\epsilon^*$ (at the perturbed parameter)\", \"Updating the policy parameters using the gradient of the perturbed loss, i.e., $\\\\theta \\\\leftarrow \\\\theta - \\\\alpha g_{\\\\theta^*}$, where $\\\\alpha$ is the learning rate.\", \"This process introduces additional gradient computations in each optimization step, effectively doubling the number of forward and backward passes compared to standard PPO.\", \"**Reason Why $\\\\epsilon$ Is Chosen in the Direction of the Gradient:** While the inner maximization problem theoretically considers all possible directions within the $\\\\rho$-ball around $\\\\theta$, choosing $\\\\epsilon$ in the direction of the gradient $g_{\\\\theta}$ is a first-order approximation that captures the worst-case perturbation efficiently. This approximation is rooted in the Taylor expansion of the loss function. The direction of the gradient indicates the direction in which the loss increases most rapidly. 
By perturbing $\\\\theta$ in this direction, we effectively approximate the maximum increase in loss within the allowed perturbation magnitude $\\\\rho$.\", \"**Prior Work Demonstrating the Efficacy of SAM:** Prior SAM-related work has focused on supervised learning, particularly in computer vision. In the pioneering work on SAM [1] (Foret et al., 2020), the authors provide extensive experimental results showing that SAM improves visual classification models on CIFAR-10, CIFAR-100, Flowers, ImageNet, etc., broadly demonstrating the efficacy of flat minima in supervised learning. However, SAM is not the only method for finding flat minima. Another method, SWA [2], employs model averaging to find a flatter loss surface and shows that it can effectively find flat minima; nevertheless, SAM is more widely used than SWA due to its explicit objective of searching for flat minima. Another learning problem where flat minima work well is domain generalization in computer vision. As shown in SWAD [3], flat minima are theoretically shown to generalize well under domain shifts of images, e.g., photograph to sketch, along with outstanding domain generalization performance. Among theoretical studies, [4] further explores the theoretical underpinnings of SAM and demonstrates its effectiveness in different settings. However, we want to point out that this in-depth understanding of flat loss surfaces is strongly focused on image-based supervised learning settings, not on the RL community.\", \"In the context of RL, as described in the Related Work section [5], a few works have recently reported that flat reward surfaces can improve the performance of RL. 
However, the prior work has the following limitations: it lacks a formal bridge between flatness in the reward landscape and robustness in RL, and the effectiveness of flat reward was not carefully examined across the multiple key factors of RL, i.e., actions, transition probabilities, and rewards.\", \"**SAM visualizations:** We note that our visualizations in Fig. 7 follow the common way to visualize flatness. The difference from supervised learning (in the original SAM paper) is that SAM there seeks \\u2018flat minima\\u2019, whereas our visualizations show a \\u2018flat maximum\\u2019 of rewards. It is hard to provide a conventional visualization of a flat loss surface here in the rebuttal. Still, we can point to related papers with standard visualizations of loss surfaces: Fig. 1 in [2] (2D visualization), Fig. 3 in [3] (plot-based visualization).\"], \"title\": \"Official Comment by Authors\"}", "{\"comment\": \"Thank you for your positive feedback and for recommending acceptance. We appreciate your thorough review, which helped us improve the paper.\"}", "{\"summary\": \"The paper investigates the relationship between flat reward maxima in policy parameter space and the robustness of reinforcement learning (RL) agents. It claims that flatter reward maxima lead to more robust policies, particularly against action perturbations. The paper presents a theoretical proposition linking flat reward to action robustness and supports this claim through empirical experiments in MuJoCo environments (e.g., Hopper-v3, Walker2d-v3, HalfCheetah-v3). The authors demonstrate that an RL algorithm enhanced with Sharpness-Aware Minimization (SAM), called SAM+PPO, consistently outperforms standard PPO and a recent robust RL baseline (RNAC) in various robustness tests, including action noise, transition probability changes, and reward function variations. 
The paper also provides visualizations and quantitative measurements of reward surfaces, further confirming the link between flatness and robustness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper provides a formal link between flat reward surfaces and robustness in policy space. Proposition 1 establishes a clear theoretical foundation for the paper's main claim.\", \"The authors comprehensively test SAM+PPO across multiple challenging environments and scenarios, including noisy actions and varying transition probabilities, to demonstrate robustness.\", \"The authors compare SAM+PPO with RNAC, PPO, and RARL, which shows both performance and computational efficiency, which strengthens their findings.\", \"The use of reward surface visualizations and flatness metrics strengthens the paper's argument by providing visual and quantitative evidence for the flatness achieved by SAM+PPO.\"], \"weaknesses\": [\"While SAM is shown to be effective, the paper lacks a discussion of its potential limitations, such as computational overhead or sensitivity to hyperparameter tuning.\", \"The justification for reward noise being added during training for reward function robustness evaluation could be clearer: The paper mentions this difference in methodology but could expand on why this is necessary for a valid evaluation.\", \"I don't know if the preliminary experiment is best placed in the introduction, it feels a bit out of place for me.\", \"typos 234 \\\"objeective\\\", 249 \\\" funciton\\\"\"], \"questions\": [\"Do you have an intuition on why SAM doesn't perform better on Walker2d-v3 for high friction factor?\", \"Have you tested SAM+PPO on non-MuJoCo environments to assess robustness in discrete action spaces or varying reward structures?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the 
impact of flat minima in reinforcement learning (RL), linking flatter reward surfaces to improved model robustness. The authors show that flatter rewards lead to more consistent actions despite parameter changes, enhancing robustness against variations in state transitions and reward functions. The authors show through extensive experiments to confirm that flatter rewards significantly bolster RL model performance across diverse scenarios.\\n\\n-------------------\", \"after_the_rebuttal\": \"The authors have addressed some of my concerns. I raised the score.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Provide a link of flat reward to action robustness. The authors show this through both theoretical results in section 4, and various experiment results. The motivation of having a robust objective is good. The theoretical result seems correct.\", \"Positive experiment results showing the benefit of optimizing for a flat reward maxima. The authors show this through different experiment settings: variation to physics properties of the underlying MDP, and visualization of the reward surface.\"], \"weaknesses\": [\"The performance of SAM + PPO is mixed in comparisons to the baselines, e.g. some visible ones at Fig 5.c, 4.b.\", \"Ablations are not provided to understand how such an objective can bring benefits in comparisons to similar approaches, e.g. RNAC or robust RL.\"], \"questions\": [\"Is the perturbation domain $\\\\rho$ in Eq.8 known to the agent? Probably the optimization of the objective in Eq.8 needs elaboration, and with pseudo-code.\", \"Why in \\\"Nominal\\\" SMA+PPO still has a higher reward, e.g. Table 1+2, Fig. 3,4. Similarly, experiment in 5.2, why when action noise is small, i.e. even equal to 0, SAM+PPO still performs better than the others, because the objectives of PPO and SAM+PPO would converge to the same one? 
And in 5.3, SAM+PPO has a higher return, while with variation in Friction Coefficient shows mixed results.\", \"Joint variation of friction and mass shows quite clear that SAM+PPO is performing better than baselines, except on Walker2d-v3 with a mixed result. Can the authors elaborate on why or provide ablation to explain the mixed performance of SAM+PPO?\", \"The proof of proposition 1 is a bit not standard. The policy is sometimes referred as a distribution, but sometime used as a deterministic mapping. It needs revised.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We sincerely appreciate your careful and thorough feedback to improve our work further. Also, thank you for waiting for our response. Within the limited rebuttal period, we have tried our best to fully address your concerns and revise the manuscript to incorporate your constructive suggestions.\", \"**Comment 1, 2, 6: Revision for the form of cumulative reward**\", \"By incorporating your careful comments, we have revised the manuscript to represent the loss function of PPO in the form of cumulative rewards.\", \"Starting from the confusion around Eq. (28), many parts are related to the revision toward the cumulative-reward form. Specifically, we have used the revised formulation, i.e., $\\\\max_{\\\\pi} \\\\min_{\\\\|\\\\delta_t\\\\| \\\\leq \\\\beta} \\\\mathbb{E}_{s \\\\sim p, a \\\\sim \\\\pi} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t, a_t + \\\\delta_t) \\\\right]$, in the objective function of the action-robust MDP (Equation 2 in Section 3). Following this change, Definitions 1 and 2, Remarks 1.1 and 1.2, and the proof of Proposition 1 (in Appendix A) have been revised. 
Following are the corresponding revisions and their locations:\", \"Equation 2: $\\\\max_{\\\\pi} \\\\min_{\\\\|\\\\delta_t\\\\| \\\\leq \\\\beta} \\\\mathbb{E}_{p, \\\\pi} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t, a_t + \\\\delta_t) \\\\right]$\", \"Definition 1 (Equation 4): $\\\\mathbb{E}_{s \\\\sim p, a \\\\sim \\\\pi_{\\\\theta^{*}+\\\\epsilon}(a|s)} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t,a_t) \\\\right]$\", \"Definition 2 (Equation 5): $\\\\mathbb{E}_{s\\\\sim p, a\\\\sim\\\\pi_{\\\\theta^{*}}} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t,a_t + \\\\delta_t) \\\\right]$\", \"Remark 1.1 (Equation 7): $\\\\mathbb{E}_{s \\\\sim p, a \\\\sim \\\\pi_{\\\\theta}} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t,a_t + \\\\delta_t) \\\\right]$\", \"Appendix A\", \"$\\\\mathbb{E}_{s \\\\sim p, a \\\\sim \\\\pi_{\\\\theta^* + \\\\epsilon}} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t r(s_t, a_t) \\\\right] = r^*.$\", \"$\\\\left| \\\\mathbb{E}_{s, a} \\\\left[ \\\\sum_{t=0}^\\\\infty \\\\gamma^t (r(s_t, a_t + \\\\delta_t) - r(s_t, a_t)) \\\\right] \\\\right| \\\\leq \\\\frac{L_r (\\\\| J(\\\\theta^*) \\\\| \\\\mathcal{E} + \\\\mathcal{O}(\\\\mathcal{E}^2))}{1 - \\\\gamma}$\", \"When $\\\\gamma=0$, as you pointed out, this reduces to the previous version of the equation.\", \"We believe that the change to cumulative rewards strictly corresponds to the actual policy training with the cumulative reward. We sincerely appreciate your careful suggestions.\", \"**Comment 3: Inappropriate reference to the proof**\", \"Thank you for pointing out our mistake. 
We have revised the line as below, clearly stating that Proposition 1 and Remark 1.1 are from Section 4 of the main text, and the corresponding proof is from Appendix A.\", \"(Before revision: L899) \\u2018Based on Proposition 1, Remark 1.1, and the corresponding proof in Section 4 of the main text,\\u2019\", \"(After revision: L911) \\u2018Based on Proposition 1, Remark 1.1 in Section 4 of the main text, and the corresponding proof in Appendix A,\\u2019\", \"**Comment 4: Missing \\u201c.\\u201d on line 902**\", \"We added the missing \\u201c.\\u201d on line 914 of the revised manuscript.\", \"**Comment 5: Computational overhead and possible efficient tricks for applying SAM**\", \"To the best of our understanding, you are referring to \\u201cthe trick\\u201d from the original SAM paper, i.e., the approximation of the SAM gradient via first-order estimation (referring to Eq. 3 in the original SAM paper). The straightforward computation of the SAM gradient requires a Hessian computation, but this can be avoided by using \\u201cthe trick\\u201d, which uses two gradient computations (at $\\\\theta$, and at the perturbed $\\\\theta$). This trick is exactly what we used in our evaluations, and our algorithmic description is also based on it.\", \"(If you are not referring to the trick described above, please let us know! Until the end of the discussion period, we will do our best to provide meaningful results.)\", \"Also, this efficient method is widely used in SAM-related research, because the naive computation of the Hessian hinders large-scale experiments with deep models.\", \"**Consequently, our complexity analysis is already based on the efficient version of SAM.**\", \"We have found other related methods in public implementations as alternative ways to compute the SAM gradient, but we do not use them due to their lack of peer-reviewed reliability.\"]}", "{\"title\": \"Response to third comment\", \"comment\": \"Thanks for giving some discussion on Prop 1 vs Remark 1.1. 
Can you please see my earlier concerns about the reward vs return objective? I see again in Eq7 that you're maximizing just the one step reward. Could you elaborate a bit on that? Thanks for clarifying Rmk 1.2, it makes more sense and I have a better mental picture of it now.\\n\\nI think adding RNAC, RARL, and even SAM+TRPO improves the experimental nature of the paper.\\n\\nCan you please comment on how there could be positive reward shifts when perturbations are present (e.g. in Table 1)? Is this statistically significant? How many runs were performed, and what is the standard deviation? Nevertheless, the addition of the other algorithms here helps gain a better understanding of the broader picture.\\n\\n\\n**Minor Ref Issues**: \\n\\nThere are still missing spaces in front of citations in Sect. 2. \\n\\nAlso, I noticed a few refs were out of date (arxiv instead of published versions), e.g. \\\"Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. Epopt: Learning\\nrobust neural network policies using model ensembles. arXiv preprint arXiv:1610.01283, 2016\\\" is in ICLR 2017.\\n\\n\\\"On large-batch training for deep learning: Generalization gap and sharp minima\\\" is also in ICLR 2017.\\n\\nSame for Ota et al, here's an up to date ref:\\n@article{ota2024aframework,\\n author = {Ota, Kei and Jha, Devesh K. and Kanezaki, Asako},\\n title = {A Framework for Training Larger Networks for Deep Reinforcement Learning},\\n journal = {Machine Learning},\\n year = {2024},\\n month = jun,\\n day = {05},\\n issn = {1573-0565},\\n doi = {10.1007/s10994-024-06547-6},\\n url = {https://doi.org/10.1007/s10994-024-06547-6},\\n}\\n\\nThe Sutton & Barto book citation seems off.\"}", "{\"comment\": [\"We thank the reviewer for their thoughtful feedback and valuable suggestions. 
As detailed below, we have addressed the weaknesses and questions raised.\", \"**Weakness 1-1: Potential limitations (computational overhead)**\", \"SAM+PPO requires additional computations in the optimization process. To describe this in detail, let us elaborate on the steps used to optimize the cost function in Eq. 8, which is SAM+PPO\\u2019s objective function.\", \"Integrating SAM into PPO requires modifying the standard optimization steps to account for the perturbation $\\\\epsilon$. This involves:\", \"Computing the base gradient $g_{\\\\theta}=\\\\nabla_{\\\\theta}\\\\mathcal{L}(\\\\theta)$ at the parameter $\\\\theta$\", \"Calculating the worst-case perturbation $\\\\epsilon^* = \\\\rho g_{\\\\theta}/||g_{\\\\theta}||$\", \"Performing an additional forward pass to obtain the loss computed at the perturbed parameter: $\\\\mathcal{L}(\\\\theta+\\\\epsilon^*)$\", \"Computing the gradient $g_{\\\\theta^*}=\\\\nabla_{\\\\theta^*}\\\\mathcal{L}(\\\\theta^*)$, where $\\\\theta^*=\\\\theta+\\\\epsilon^*$ (at the perturbed parameter)\", \"Updating the policy parameters using the gradient of the perturbed loss, i.e., $\\\\theta \\\\leftarrow \\\\theta - \\\\alpha g_{\\\\theta^*}$, where $\\\\alpha$ is the learning rate.\", \"This process introduces one additional gradient computation in each optimization step, doubling the required number of forward and backward computations compared to standard PPO.\", \"In big $\\\\mathcal{O}$ notation, the per-iteration computational complexity increases from $\\\\mathcal{O}(N)$ for PPO to $\\\\mathcal{O}(2N)$ for SAM+PPO, where $N$ is the number of parameters.\", \"In actual training, we have additionally measured the training time per optimization step (or iteration) of PPO and SAM+PPO. 
As shown in the following table, SAM+PPO incurs approximately 1.2 to 1.8 times larger time per model update across all experiments, which seems a reasonable trade-off considering the robustness gains.\", \"| Algorithm | HalfCheetah | Hopper | Walker |\", \"| --- | --- | --- | --- |\", \"| | Train | Train | Train |\", \"| PPO | 1.22 | 0.13 | 0.2 |\", \"| SAM+PPO | 1.5(\\u00d71.83) | 0.23(\\u00d71.76) | 0.24(\\u00d71.20) |\", \"In the revised manuscript, we have added the aforementioned step-by-step description of SAM+PPO\\u2019s optimization as Algorithm 1 in Appendix C.\", \"Also, in the revised manuscript, we have added the training time per iteration in Table C.1 in Appendix C.\", \"**Weakness 1-2: Potential limitations (sensitivity of hyperparameters)**\", \"The newly introduced hyperparameter for SAM+PPO beyond PPO is $\\\\rho$, i.e., the radius of the perturbations. The value of $\\\\rho$ directly affects the degree of flatness sought in the reward landscape. A larger $\\\\rho$ encourages the optimizer to find flatter maxima, potentially enhancing robustness but at the risk of training instability (overly wide flat maxima are hard to find). Conversely, a smaller $\\\\rho$ may result in less robustness gain. 
Selecting an appropriate $\\\\rho$ is crucial not only in our RL cases but also in other learning tasks.\", \"We have additionally evaluated the sensitivity of performance on Hopper by changing $\\\\rho$:\", \"| Algorithm | Nominal | Action Noise \\u03c3 = 0.05 | Action Noise \\u03c3 = 0.1 | Action Noise \\u03c3 = 0.15 | Action Noise \\u03c3 = 0.2 | Action Noise \\u03c3 = 0.25 |\", \"| --- | --- | --- | --- | --- | --- | --- |\", \"| PPO | 3217 | 2083 | 1792 | 1577 | 1467 | 1284 |\", \"| SAM+PPO($\\\\rho=0.001$) | 3329 | 2214 | 1619 | 1515 | 1291 | 1085 |\", \"| SAM+PPO($\\\\rho=0.005$) | 3294 | 2795 | 1853 | 1305 | 1077 | 972 |\", \"| SAM+PPO($\\\\rho=0.01$) | **3766** | **3732** | **3589** | **3123** | **2312** | **1917** |\", \"| SAM+PPO($\\\\rho=0.05$) | 2191 | 2019 | 2038 | 1920 | 1732 | 1782 |\", \"| SAM+PPO($\\\\rho=0.1$) | 2735 | 2716 | 2306 | 2077 | 1781 | 1739 |\", \"We have selected $\\\\rho=0.01$ as the optimized hyperparameter. It is hard to say that SAM works well across a broad range of $\\\\rho$ values, but this sensitivity to the perturbation radius is commonly observed in other SAM-based experiments for visual classification.\", \"In addition, we found that the choice of $\\\\rho=0.008$, which is similar to the choice for Hopper, is optimal for HalfCheetah and Walker2d. This means that a similar level of perturbation radius is required for all three environments.\", \"For the other hyperparameters, applying SAM to PPO does not inherently change the role or sensitivity of PPO's original hyperparameters. Thus, we simply used the same hyperparameters for PPO and SAM+PPO, reducing the hyperparameter-tuning effort.\", \"We have added the sensitivity results of $\\\\rho$ in Appendix C.3.\"]}", "{\"comment\": [\"**Comment 7: How could there be positive reward shifts in perturbed evaluation?**\", \"First of all, we have performed 100 evaluation runs to compute the average performance. 
We believe this number of trials is large enough to trust the mean performance.\", \"However, standard deviations are quite large in our evaluations (as shown in the colored intervals of each plot in the figures). Also, we want to point out that RL environments commonly show far larger variances than conventional classification testing (in accuracy).\", \"Also, the observed \\u2018positive reward shifts\\u2019 are much smaller than the reward performance values, i.e., on the order of tens (shift) versus thousands (performance).\", \"Keeping these in mind, we conclude that **i)** the mean performance is statistically reliable, and **ii)** the cases with positive shifts do not represent meaningful gains given the large reward magnitudes and variances.\", \"However, from a high-level viewpoint, this kind of minimal degradation, or even a slight gain, can occur when the change in the environment is not challenging to the agent. As widely known, deep models are naturally capable of generalizing to unseen samples/tasks/environments; thus, such outliers with small positive shifts are possible.\", \"**Comment 8: Minor Ref Issues**\", \"Thank you for providing details on Section 2 and the references. We reviewed the whole manuscript for missing spaces before citations and updated any outdated references.\", \"Updated references:\", \"Rajeswaran, A., Ghotra, S., Ravindran, B., & Levine, S. (2017). EPOpt: Learning Robust Neural Network Policies Using Model Ensembles. In *Proceedings of the International Conference on Learning Representations (ICLR)*.\", \"Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., & Tang, P. T. P. (2017). On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In *Proceedings of the International Conference on Learning Representations (ICLR)*.\", \"Ota, K., Jha, D. K., & Kanezaki, A. (2024). A Framework for Training Larger Networks for Deep Reinforcement Learning. *Machine Learning*. 
Advance online publication. https://doi.org/10.1007/s10994-024-06547-6\", \"Sutton, R. S., & Barto, A. G. (2018). *Reinforcement Learning: An Introduction* (2nd ed.). MIT Press.\", \"We sincerely appreciate your careful comments.\", \"**Comment 9: Combining Table 1 and Table 7**\", \"We appreciate your thoughtful suggestion to make the experiments more comprehensive.\", \"We agree that a combined table would be more comprehensive. However, due to the strict 10-page limit, we were unable to include the combined table in the main text without exceeding the allowed length. Instead, we added a line clearly indicating that the additional experiment applying SAM to another RL algorithm is presented in the appendix.\", \"(L332) \\u2018Additional evaluations of other RL algorithm and SAM enhanced version are presented on Appendix D.2\\u2019\", \"**Comment 10: Bold font in Table 1**\", \"We used boldface in Table 1 to highlight two key aspects for each environment:\", \"the highest performance in the \\u2018Nominal\\u2019 columns,\", \"the smallest performance degradation in the \\u2018Perturbed\\u2019 columns.\", \"To improve clarity, we have included a brief explanation in the text accompanying the table to describe how the highlighting is applied:\", \"(L463) \\u2018Bold face is used for Highest performance in \\u2019Nominal\\u2019, smallest performance degradation in \\u2019Perturbed\\u2019\\u2019\", \"We have corrected the usage of boldface in Table 1.\", \"**Comment 11: Revising Figure 1**\", \"Thank you for the suggestions to enhance the visual presentation of Figure 1.\", \"We removed the axis ticks and labels in the figure, as you mentioned. 
We tried plotting the two figures as a single large figure to provide a direct comparison, but found that this weakened the intended highlighting of the narrow path that SAM+PPO avoids.\", \"While keeping the two figures separate, we made Figure 1 clearer and more interpretable.\"]}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"**Weakness 5 & Question 1: Tests in additional environments and baselines**\\n\\n- **Additional baselines:** We have included comparisons with an additional baseline, RARL (Robust Adversarial Reinforcement Learning), in our main experiments (in the submitted version, RARL appeared only in the complexity comparison; the revised paper adds all robustness experimental results).\\n- In the revised paper, we added the RARL results in Section 5, Experimental Results. We confirm that SAM+PPO consistently outperforms RARL in almost all robustness experiments (a wider range of robustness under friction-mass joint perturbation in Fig. 6, and better robustness against reward perturbations in Table 2). 
RARL shows the best performance in action robustness for the Hopper case (in Table 1), but this does not change the overall superiority of SAM+PPO.\\n- We reproduce the \\u2018action robustness\\u2019 part of Table 1 from the main paper below:\\n\\n| Perturbation | Algorithm | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \\u03c3 = 0.2 | PPO | 4758 | 1469(\\u22123289) | 3217 | 1467(\\u22121750) | 4883 | 607(\\u22124276) |\\n| | RNAC | 5484 | 2014(\\u22123470) | 3445 | 1321(\\u22122124) | 4147 | 652(\\u22123495) |\\n| | RARL | 4996 | 3412(\\u22121584) | 2819 | **1645(\\u22121174)** | 4020 | 764(\\u22123256) |\\n| | SAM+PPO | **6523** | **4949(\\u22121574)** | **3766** | 2312(\\u22121454) | **5129** | **2033(\\u22123096)** |\\n\\n- To answer Question 1, we have considered TRPO as a new baseline and demonstrated how well SAM works in conjunction with TRPO.\\n- Specifically, we extended our study by integrating SAM with TRPO, resulting in SAM+TRPO. TRPO uses trust-region optimization to ensure stable policy updates by directly constraining the KL divergence between the old and new policies. As detailed in Appendix D, SAM+TRPO outperforms standard TRPO in several environments, indicating that the benefits of SAM are not limited to PPO. 
The results show that SAM enhances the robustness of TRPO, suggesting that promoting flatness in the loss landscape is beneficial across different optimization frameworks.\\n\\n| Perturbation | Algorithm | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \\u03c3 = 0.2 | TRPO | 4805 | 1502 (\\u22123303) | 3118 | 1452 (\\u22121666) | 4975 | 603 (\\u22124372) |\\n| | SAM+TRPO | **5502** | **3975 (\\u22121527)** | **3547** | **2313 (\\u22121234)** | **5097** | **2052 (\\u22123045)** |\\n| Mass Scale Factor 1.2 | TRPO | 4837 | 3865 (\\u2212972) | 3215 | 1556 (\\u22121659) | 4957 | 782 (\\u22124175) |\\n| | SAM+TRPO | **5562** | **5210 (\\u2212352)** | **3499** | **3508 (+9)** | **5205** | **5284 (+79)** |\\n| Friction Coefficient 0.88 | TRPO | 4723 | **4774 (+51)** | 3075 | 1580 (\\u22121495) | 4996 | 4756 (\\u2212240) |\\n| | SAM+TRPO | **5562** | 5539 (\\u221223) | **3498** | **2728 (\\u2212770)** | **5073** | **5134 (+61)** |\\n- **Additional environments:** Furthermore, we conducted additional experiments on environments provided by OpenAI Gym, including environments with discrete action spaces, such as CartPole and LunarLander.\\n- The new experimental results are included in Appendix D. 
We discuss how SAM+PPO performs in these settings and compare it with the provided baselines to show how SAM+PPO performs in settings with different action dynamics (we use action noise with $\\\\sigma = 0.2$).\\n\\n| Algorithm | CartPole-v1 | | LunarLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Perturbed | Nominal | Perturbed |\\n| PPO | **500** | 464(-36) | **200** | 175(-25) |\\n| SAM+PPO | **500** | **481(-19)** | **200** | **188(-12)** |\\n- Also, for the noisy reward cases, SAM+PPO shows gains (we use reward noise with $\\\\sigma = 0.1$; the noise is added during training, as done in the main experiment).\\n\\n| Algorithm | CartPole-v1 | | LunarLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Noisy | Nominal | Noisy |\\n| PPO | **500** | 432(-68) | **200** | 165(-35) |\\n| SAM+PPO | **500** | **458(-42)** | **200** | **182(-18)** |\\n- From the theoretical viewpoint, we point out that SAM is a model-agnostic optimization method that can be applied to any gradient-based algorithm, including other policy gradient (PG) and actor-critic (AC) methods like TRPO, A2C, and SAC.\\n- In our revised manuscript, we have added these results in Appendix D.1.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": [\"We thank the reviewer for their thoughtful feedback and valuable suggestions. As detailed below, we have addressed the weaknesses and questions raised.\", \"**Writing**\", \"**Weakness 1: Fix grammar and overall structure**\", \"Thank you for highlighting the need for improved clarity and grammatical correctness. We have undertaken a thorough revision of the paper to address grammatical errors and enhance the overall structure. We have uploaded the revised paper.\", \"**Weakness 2: Visualizations of Definitions 1 & 2**\", \"We agree that visualizations can greatly aid in understanding formal definitions. 
In response, we have added illustrative figures to accompany Definitions 1 and 2 in the revised paper.\", \"**Weakness 3: Confused phrasing of $\u0394^\u2217$-robust and $\u0394$-robust**\", \"We appreciate the reviewer's feedback pointing out the potential confusion in the phrasing of Proposition 1.\", \"The notation $\\\\Delta^*$ denotes the specific maximum perturbation magnitude for which the policy $\\\\pi_{\\\\theta^*}$ remains robust, given the $\\\\mathcal{E}$-flatness at $\\\\theta^*$. It is directly derived from the properties of $\\\\theta^*$ and provides a precise bound.\", \"In **Definition 2**, $\u0394$-action robustness is defined in general terms, without specifying a particular value for $\u0394$.\", \"In **Proposition 1**, $\u0394^\u2217$ indicates the specific value of $\u0394$ that corresponds to the $\\\\mathcal{E}$-flat maximum $\u03b8^\u2217$.\", \"To enhance clarity, we have rephrased the proposition to make the logical implication explicit and to clarify the use of the notation $\\\\Delta^*$.\", \"If $\\\\theta^*$ is an $\\\\mathcal{E}$-flat reward maximum, then the policy $\\\\pi_{\\\\theta^*}$ is $\\\\Delta^*$-action robust, where:\", \"$\\\\Delta^* \\\\leq \\\\||J(\\\\theta^{*})\\\\||\\\\mathcal{E} + \\\\mathcal{O}(\\\\mathcal{E}^2)$\"]}", "{\"title\": \"Official Comment by Authors\", \"comment\": [\"**Weakness 2-4: Understanding how SAM+PPO enhances the robustness**\", \"The key to SAM's robustness improvement is that it indirectly perturbs actions, transitions, and rewards by perturbing the model parameters. Thus, SAM can experience actions as varied as those that would arise from perturbing the policy parameters. 
Therefore, SAM+PPO can be robust against worst cases among the variations of actions, transitions, and rewards, leading to generally well-generalized performance across variations in diverse environmental factors.\", \"In contrast, other action-robust RL methods, including RNAC and RARL, explicitly perturb the environment to experience the worst case. Specifically, RNAC considers the uncertainties in the transition probability of the environment and performs robust policy optimization under the worst-case expected return over the uncertainty set. RARL adds adversarial perturbations to actions, maximizing its expected return under the exposure of adversarial perturbations. Therefore, these algorithms are strongly tailored to be robust against transition probability or action perturbations but do not aim to achieve broad robustness across possible environmental variations. That is why SAM+PPO generally outperforms other robust RL methods across various perturbed settings.\", \"**Question 1: Clarification on the Perturbation Domain $\\\\rho$ in Eq. 8**\", \"In brief, we have added the pseudocode of SAM+PPO in Appendix C to describe the training steps fully. In the algorithm, $\\\\rho$ is the radius of the parameter perturbations $\\\\epsilon$, within which the worst-case loss is minimized (referring to the min-max problem of Eq. 8). 
**In training, $\\\\rho$ is a hyperparameter of SAM; its value is not given to the agent.**\", \"Briefly, the steps in the pseudocode in Appendix C are:\", \"Computing the base gradient $g_{\\\\theta}=\\\\nabla_{\\\\theta}\\\\mathcal{L}(\\\\theta)$ at the parameter $\\\\theta$\", \"Calculating the worst-case perturbation $\\\\epsilon^* = \\\\rho g_{\\\\theta}/||g_{\\\\theta}||$\", \"Performing an additional forward pass to obtain the loss computed at the perturbed parameter: $\\\\mathcal{L}(\\\\theta+\\\\epsilon^*)$\", \"Computing the gradient $g_{\\\\theta^*}=\\\\nabla_{\\\\theta^*}\\\\mathcal{L}(\\\\theta^*)$, where $\\\\theta^*=\\\\theta+\\\\epsilon^*$ (at the perturbed parameter)\", \"Updating the policy parameters using the gradient of the perturbed loss, i.e., $\\\\theta \\\\leftarrow \\\\theta - \\\\alpha g_{\\\\theta^*}$, where $\\\\alpha$ is the learning rate.\", \"We hope this clarifies the details of the training steps of our algorithm.\", \"**Question 4: Proof of Proposition 1 needs a further revision**\", \"We have revised the proof of Proposition 1 to ensure that the policy is consistently defined as a probability distribution throughout the proof.\", \"Following the standard formulations of policies in PPO and TRPO, we represent the policy $\\\\pi_{\\\\theta}(a|s)$ as a Gaussian distribution with mean action $\\\\mu_{\\\\theta}(s)$ and fixed covariance matrix $\\\\Sigma$, i.e., $\\\\pi_{\\\\theta}(a|s) = \\\\mathcal{N}(a; \\\\mu_{\\\\theta}(s), \\\\Sigma)$. This is the standard way to handle continuous action spaces with stochastic policies. As a special note, we now additionally assume Lipschitz continuity of the reward, an assumption widely used in related robust RL analyses.\", \"We truly appreciate the time you took to review our work. We hope our response clarifies the raised concerns.\"]}", "{\"comment\": \"Thank you for your encouraging feedback and for taking the time to review our response. 
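The pseudocode steps described above can be sketched in a few lines. This is a minimal toy illustration of the SAM update, not the paper's implementation: the quadratic stand-in loss and the values of `rho` and `alpha` are our assumptions.

```python
import numpy as np

# Toy quadratic stand-in for the PPO surrogate loss; its minimum
# (theta = 0) plays the role of a flat optimum in parameter space.
def loss(theta):
    return 0.5 * float(np.dot(theta, theta))

def grad(theta):
    return theta  # analytic gradient of the toy loss

def sam_step(theta, rho=0.05, alpha=0.1):
    g = grad(theta)                              # step 1: base gradient at theta
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # step 2: worst-case perturbation epsilon*
    g_pert = grad(theta + eps)                   # steps 3-4: gradient at theta + epsilon*
    return theta - alpha * g_pert                # step 5: update with the perturbed gradient

theta = np.array([1.0, -2.0])
for _ in range(50):
    theta = sam_step(theta)
```

Note that `sam_step` evaluates the gradient twice per update, which is where the roughly 2x per-update cost discussed elsewhere in this thread comes from.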
We appreciate your support and consideration.\"}", "{\"title\": \"Response to reviewers\", \"comment\": \"Thank you again for such an in-depth response. This set of responses, as well as those to other reviewers have changed my mind a bit. I stand by my original view that this is an interesting first step in SAM-style ideas applied to RL, which I believe will open avenues for future theoretical and experimental contributions. However, the authors have now significantly improved the manuscript in *three* regards (1) experimental justification with additional baselines, (2) theoretical justification with expanded discussion and clarification (the \\\"reward to return\\\" fix was crucial!) and (3) an overall improvement in writing and presentation quality. Therefore, I am happy to recommend acceptance of this work and I look forward to seeing how it impacts the field.\"}", "{\"summary\": \"The paper presents a study on using sharpness-aware regularization to obtain robust reinforcement learning policies. Drawing a theoretical connection between flatness in the reward, action and parameter space to action-robust RL, the authors present both a theoretical justification and experiments to show that the proposed method achieves good robustness properties.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors propose a simple yet intuitive approach for robust RL. I was somewhat surprised that this combination has apparently not been tried in the literature, but a brief literature survey has not brought up any similar algorithms. I actually think the authors are somewhat underselling their contributions here! While SAM has been used to train PPO before, the authors appropriately cite prior work here, previous papers have not drawn any connections to robust RL at all and the authors should feel entitled to proudly claim this connection as their connection! 
They do not merely provide theoretical backing; as far as I can tell, they make a connection that was wholly absent in cited work.\\n\\nThe theoretical statements are mostly correct as far as I can tell. See questions below however.\", \"weaknesses\": \"The main problems with the paper as it stands are the writing and the baseline comparisons.\\n\\nEspecially the beginning of the paper, abstract and introduction, suffer from very frequent grammar mistakes which make the paper much harder to read. I strongly encourage the authors to revise the paper wrt the writing.\\n\\nIn Definition 1, I'm unsure if $\\\\epsilon$ is added to the policy, parameters or action? From the proof it seems this is a parameter perturbation; this should be stated directly. I think adding parentheses in the equation would already make this much clearer, as we have two nested subscripts here.\\nIn addition, the state is sampled from the policy, which seems strange?\\n\\nAs the theoretical statement depends on the Jacobian of the policy network, which is not bounded anywhere, I'm slightly skeptical that the theoretical results are sufficient to practically guarantee robust RL. Does the SAM objective guarantee or incentivize a flat Jacobian?\\n\\nGiven the surprisingly (?) bad results of RNAC - it barely seems to outperform PPO - I think it would be appropriate to apply SAM+PPO in the same environments as used in the RNAC paper. As far as I can tell, the code is available, so this should be feasible within the rebuttal timeline? If not, I will not hold this against the authors. I think it is important to verify that used examples are not cherry-picked to make the presented algorithm look stronger. This is the higher priority comment in terms of baseline comparisons.\\n\\nI would encourage the authors to present some additional baselines. I acknowledge that more baselines is a somewhat lazy comment. 
However, given that there are several different formulations of robust RL, I believe it would be helpful to pick a variety of environments and algorithms presented with different robust formulations for comparison to understand how well the algorithm does in comparison to others. This doesn't have to be many or complex environments, just a larger variety of formalisms. This is a soft concern and not a large barrier to acceptance for me.\\nBoth safe-control-gym [1] and Safety Gymnasium [2] provide a variety of tasks and implemented baselines to speed up experimentation.\\n\\n[1] https://github.com/utiasDSL/safe-control-gym\\n[2] https://github.com/PKU-Alignment/safety-gymnasium\", \"questions\": \"Is there a specific advantage to using PPO with SAM, or could any PG or even AC algorithm be used? It might be that the clipping approximation to the trust region synergizes well with the SAM objective? I think this is an optional extension to the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a new method to ensure robustness in RL based on variations in the loss landscape: \\\"SAM\\\": Sharpness-Aware Minimization. By posing the policy optimization as a min/max objective with respect to perturbations in the parameter space, the authors show robustness to changes in reward and dynamics. A theoretical result is given, linking parameter and reward robustness, and a diverse set of experiments on 3 MuJoCo environments is provided.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This use of robustness in policy parameter space seems to be fairly new\", \"The experiments demonstrate a strong performance boost across a range of perturbations\", \"The visualizations in Figure 6 offer an interesting insight into the optimizations produced by PPO vs SAM+PPO. The Hopper example is quite striking. 
Could you elaborate on the distinction and sharp dropoffs seen there?\", \"Theory provides a potential link between flatness in parameter space and action robustness\", \"Provided a solid comparison wrt computational overhead / sample complexity and wall time versus other algorithms\", \"Figure 5 is quite nice, I think it should be emphasized\", \"Overall, the paper seems like a nice first step in the direction of understanding the relationship between robustness in reward, policy parameter, and dynamics spaces. The notion of \\\"flat rewards\\\" is an interesting one.\"], \"weaknesses\": [\"**Writing:**\", \"Overall, I think the clarity of the paper can be enhanced with a re-write fixing grammar and overall structure:\", \"It would be helpful for example, to also include some visualizations of the definitions 1 & 2.\", \"Also, Proposition 1 and the following remarks are not very clear. As an example, for Prop1, if I understand correctly, the result would be better phrased as \\\"if $\\\\mathcal{E}$-flat, then $\\\\Delta$-robust, with $\\\\Delta \\\\leq ...$ otherwise the current phrasing is a bit confusing.\", \"**Discussion:**\", \"The discussion of the main idea, \\\"SAM\\\" is lacking:\", \"After it is introduced in Sec 3.3, the authors give a way to solve the optimization problem in Eq (3) by their steps (i)-(iv). However, (to me at least), it is not clear why this method is used. Is there prior work demonstrating the efficacy of this method? Are there experiments or maybe some minimal example illustrating the utility of this setup? E.g. why is $\\\\epsilon$ chosen to be in the direction of the previously computed gradient, if theoretically it should represent an arbitrary direction in the ball.\", \"At the very least, can the authors provide some visual demonstration as to what is happening here in the loss landscape? 
Getting a better intuition would help to understand the core method of the paper.\", \"Remark 1.1 seems to be a restatement of Prop 1 unless I am missing something. Could you please explain?\", \"Remark 1.2 can be improved by using more technically accurate statements (i.e. what is meant by \\\"when a reward function slightly changes\\\")? What is meant by the \\\"direct [correspondence] to the changes of loss function in the supervised learning case\\\"? I think the latter is very unclear, and maybe even misleading.\", \"**Experiments:**\", \"My only issue with the experiments (minor) is that you are missing RNAC in Table 2 (why?). Also why not compare against RARL? Missing explanation of the shaded regions in each figure caption.\"], \"questions\": [\"I'm really curious about \\\"flat rewards\\\" in general. Definitions 1 and 2 seem too strict at first glance (the equalities therein), so it is actually a bit surprising to me that they are even possible at all; however IIUC, Fig 6 does give evidence of this. I think that these definitions can be further elaborated on (do you have a toy example where it is easy to see in parameter or action space?) Realistically, what values of $\\\\epsilon$ do you think are reasonable? Something like $10^{-11}$ or $10^{-2}$? (I might've missed it somewhere, sorry.) If these are novel definitions not previously given in the literature, that can be stated as a contribution of the paper. I think it can spark future work in both theory and experimental directions.\", \"Here are some follow up questions/comments:\", \"In Sec 5, how long are those agents trained for? Equal number of env steps for each? How were hparams tuned for each algo?\", \"What is the agent's action scale for these environments (cf L337)? What do you do if the noise added is outside the action range?\", \"Do you have any ideas about the sharp dropoff in Fig 3b for SAM PPO? it looks interesting, but I'm not sure what to make of it... 
is there some \\\"critical\\\" mass ratio? I.e., if we zoom in, how sharp is that transition, and have you averaged over enough random seeds?\", \"you mention \\\"flatter reward maxima\\\" in L70. I think a formal definition or good visualization of this phenomenon early on would really improve the paper.\", \"How does this work relate at all to other trust region methods like TRPO? How about e.g. [1]\", \"[1]: https://arxiv.org/abs/2103.06257\", \"Typos/minor\", \"Fig 3 caption \\\"nomial\\\"\", \"some missing +/- signs in Table 1 (in parens)\", \"citations in sec 2 often have a missing leading space.\", \"can you improve the visual in Fig 1? I think it's important but not quite capturing the essence. Maybe just to remove axes and grid and zoom in a bit: is there indeed a channel for the agent? It's hard to see\", \"The introduction paragraphs have some grammatical issues. A cleanup/re-write here can help to crystallize the main message early on\", \"With a rewrite to clean up the presentation, deeper explanation for SAM (i)-(iv), and perhaps a few more visualizations, this could be a really strong paper; but unfortunately I don't think it's quite there yet.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness 7: Remark 1.1 and Proposition 1**\\n\\n- Proposition 1 establishes a theoretical link between reward surface flatness and robustness to action perturbations in reinforcement learning. Specifically, it states that if a policy is $\\\\mathcal{E}$-flat in the parameter space, then it is $\\\\Delta^*$-robust with respect to action perturbations, where $\\\\delta \\u2264$ $\\\\Delta^*$. 
This proposition formally establishes that **flatness in the policy parameters leads to robustness against perturbations in actions**, as it ensures that perturbations in parameters (and consequently, actions) do not significantly affect the expected reward.\\n- Remark 1.1 serves to highlight the practical significance of the theoretical result. It explains that the SAM optimization method we employ leads to the action robust MDP objective. This provides a clear justification for using SAM in our approach to achieve robustness to action perturbations.\\n- Remark 1.1 extends the theoretical result of Proposition 1 by highlighting its practical application in our approach. It emphasizes that by applying the SAM optimization (Equation 3) to the standard reinforcement learning objective, we effectively obtain the action robust MDP objective (Equation 2). The remark underscores that the optimization process introduced by SAM aligns with the robust optimization framework of action robust MDPs. Specifically, optimizing the SAM objective inherently addresses the robustness to action perturbations as formalized in robust MDP formulations.\\n\\n**Weakness 8: Remark 1.2**\\n\\n- We revised Remark 1.2 to use technically accurate statements. 
The revised statements are as follows.\\n - (L303) \u2018For the changes of reward function\u2019 \u2192 \u2018For reward function perturbations\u2019\\n - (L304) \u2018direct [correspondence] to the changes of loss function in the supervised learning case\u2019 \u2192 \u2018it directly corresponds to the perturbations of loss function in the supervised learning case\u2019\\n - (L305) \u2018when a reward function slightly changes\u2019 \u2192 \u2018when a reward function has merely slight perturbations\u2019\\n - (L306) \u2018When the MDP\u2019s transition probability changes\u2019 \u2192 \u2018When the MDP\u2019s transition probability has perturbations\u2019\\n- **Clarifying the Analogy to Supervised Learning:** In supervised learning, robustness to changes in the loss function (e.g., from label noise or adversarial examples) is achieved when the model's parameters are in flat regions of the loss landscape. This means that small perturbations in the inputs or outputs lead to minimal changes in the loss. Similarly, in reinforcement learning, robustness to changes in the reward function or transition dynamics can be achieved when the policy parameters lie in flat regions of the expected reward landscape. By promoting flatness in the reward landscape, we make the policy less sensitive to small perturbations in the environment, whether they arise from changes in rewards or transitions.\\n\\n[1] Foret, P., Kleiner, A., Mobahi, H., & Neyshabur, B. (2020). *Sharpness-Aware Minimization for Efficiently Improving Generalization*. International Conference on Learning Representations (ICLR).\\n\\n[2] Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. (2018). Averaging weights leads to wider optima and better generalization. *arXiv preprint arXiv:1803.05407*.\\n\\n[3] Cha, J., Chun, S., Lee, K., Cho, H. C., Park, S., Lee, Y., & Park, S. (2021). 
SWAD: Domain generalization by seeking flat minima. (NeurIPS)\\n\\n[4] Zhuang, L., Niu, G., & Sugiyama, M. (2021). *Surrogate Gap Minimization Improves Sharpness-Aware Training*. International Conference on Learning Representations (ICLR)\\n\\n[5] Sullivan, R., Terry, J. K., Black, B., & Dickerson, J. P. (2022, June). Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments. (ICML)\\n\\n**Experiments**\\n\\n**Weakness 9: Missing result in Table 2**\\n\\n- We have additionally included the results of RNAC and RARL in Table 2.\\n- The table below shows the performance comparison of agents trained with and without reward noise ($\u03c3_r$ = 0.1).\\n- SAM+PPO outperforms RNAC and RARL in the noisy-reward cases.\\n\\n| Algorithm | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| | Nominal | Noisy | Nominal | Noisy | Nominal | Noisy |\\n| PPO | 4820 | 3688(\u22121132) | 3150 | 2945(\u2212205) | 4780 | 2204(\u22122576) |\\n| RNAC | 5423 | 4088(\u22121335) | 3211 | 3035(\u2212176) | 4184 | 3172(\u22121012) |\\n| RARL | 5620 | 4617(\u22121003) | 3124 | 2993(\u2212131) | 4388 | 3085(\u22121303) |\\n| SAM+PPO | **6530** | **5990(\u2212540)** | **3505** | **3377(\u2212128)** | **5120** | **4226(\u2212894)** |\"}", "{\"comment\": \"**Question 3: Ideas of sharp performance dropoff in Hopper-v3 of high mass factor (Figure 4b) for SAM+PPO**\\n\\n- Thank you for your insightful observation regarding the sharp drop-off in performance for SAM+PPO in Figure 4b. We have investigated this phenomenon and would like to explain it.\\n- The sharp decline in performance at higher mass coefficients, particularly noticeable at a mass coefficient of 1.4 for SAM+PPO (similarly at 1.3 for RARL, and at 1.1 for both RNAC and PPO), is due to the physical limitations inherent in the Hopper environment. 
Hopper is designed with a single leg and relies on precise balance and sufficient torque to propel itself forward and maintain stability.\\n- As the mass coefficient increases, the agent becomes significantly heavier while the actuator limits (maximum torque outputs) remain unchanged. There is a critical mass ratio beyond which the actuators cannot generate enough force to counteract the increased gravitational force on the heavier body. This results in the robot failing to make forward progress or maintain an upright position, leading to episodes terminating prematurely due to falls.\\n- The critical mass ratio is influenced by both the environmental constraints and the algorithm's ability to adapt to those constraints.\\n- We added the performance evaluation of RARL in the main experiment and found that RARL also exhibits a sharp performance drop-off at high mass coefficients. Up to this critical point, SAM+PPO and RARL can adapt to the changes in mass by adjusting their control policies, thanks to their robustness mechanisms. However, beyond this critical mass, the task becomes physically infeasible for the agent to perform, regardless of the robustness of the policy.\\n- Also, the performance was averaged over 100 evaluation episodes for each mass coefficient value, providing statistically reliable results and confirming the observed phenomenon.\\n\\n**Question 4: Definition of flat reward maxima and visualization of the phenomenon**\\n\\n- We defined $\\\\mathcal{E}$-flat reward maxima in Section 4, along with the preliminaries needed to understand the definition. We also added illustrative figures for the definitions of $\\\\mathcal{E}$-flat reward maxima and $\\\\Delta$-action robustness (referring to Figure 2 in the revised paper).\\n\\n**Question 5: How does SAM relate to other methods like TRPO**\\n\\n- Both SAM+PPO and TRPO aim to improve policy optimization by controlling the update step to promote stability and robustness. 
While TRPO enforces a trust region via constraints on the KL divergence, SAM promotes flatness in the loss landscape through parameter perturbations. We have extended our work to integrate SAM with TRPO, creating SAM+TRPO. We have included experimental results comparing SAM+TRPO with standard TRPO and other baselines. The results, presented in Appendix D, show that SAM can enhance TRPO's performance by further promoting robustness.\\n\\n| Perturbation | Metric | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | Nominal | Perturbed | Nominal | Perturbed | Nominal | Perturbed |\\n| Action Noise \u03c3 = 0.2 | TRPO | 4805 | 1502 (\u22123303) | 3118 | 1452 (\u22121666) | 4975 | 603 (\u22124372) |\\n| | SAM+TRPO | **5502** | **3975 (\u22121527)** | **3547** | **2313 (\u22121234)** | **5097** | **2052 (\u22123045)** |\\n| Mass Scale Factor 1.2 | TRPO | 4837 | 3865 (\u2212972) | 3215 | 1556 (\u22121659) | 4957 | 782 (\u22124175) |\\n| | SAM+TRPO | **5562** | **5210 (\u2212352)** | **3499** | **3508 (+9)** | **5205** | **5284 (+79)** |\\n| Friction Coefficient 0.88 | TRPO | 4723 | **4774 (+51)** | 3075 | 1580 (\u22121495) | 4996 | 4756 (\u2212240) |\\n| | SAM+TRPO | **5562** | 5539 (\u221223) | **3498** | **2728 (\u2212770)** | **5073** | **5134 (+61)** |\\n\\n**Question 6: Typos and minor issues**\\n\\n- Thank you for catching the typos; we appreciate your attention to detail. We have corrected Figure 4\u2019s caption, reviewed Table 1 and added the missing \"+/-\" signs, and corrected the citation formatting to include the necessary leading spaces. 
Also we have thoroughly revised the introduction to address grammatical issues and improve the overall clarity.\\n- We have redesigned Figure 1 by adding the mini-figure that has increased the zoom level, to make key features more visible, illustrating that there indeed is a narrow path for the agent.\\n\\nWe truly appreciate your time for reviewing our work. We hope our response clarifies the raised concerns.\"}", "{\"title\": \"Response to first two comments\", \"comment\": [\"Thanks a lot for addressing my comments in detail. I am taking some time to read and digest your comments to the other reviewers as well. I think your responses and the re-written version of the paper are quite good.\", \"Here are some quick comments about the new Appendix C. Thank you for adding these details, but I still have some confusion:\", \"Eq 28 is confusing - this is a one-shot SL problem, whereas the true PPO loss function is concerned with entire trajectories (as you write in Alg1, p18). It seems like you hint at this mismatch in the next line, \\\"which can be extended to Equation 8 in Section 5, by...\\\" I might be confused but it seems like more than \\\"extending\\\", I think Eq28 is just not valid for the temporal setting in RL (it is equivalent to setting $\\\\gamma=0$).\", \"Also after looking at https://arxiv.org/pdf/1901.09184 it seems to me that they consider the full discounted return, not one step, as you wrote in Eq. 2. Could you please clarify this for me?\", \"You then mention \\\"corresponding proof in Section 4 of the main text\\\", but I don't see any proof there. Can you elaborate or maybe fix the typo?\", \"Missing \\\".\\\" on L902\", \"I appreciate your step-by-step description of the algorithm. It helped me understand my previous concern about choosing the direction in this $\\\\rho$-ball, and now seems like a well-founded idea. 
(I also should have looked more carefully at the original SAM work: https://arxiv.org/pdf/2010.01412 )\", \"As you mention, based on Eqs. 6 and 7 in App. C, it seems like SAM+X needs 2x as many gradient steps and 2x as many forward passes. Assuming the environment step speed is negligible in comparison, does this mean SAM+X takes roughly 2x as much wall time? Is there a way to estimate either of these values without explicit re-calculation? Can you use the trick used in the original SAM paper? If possible, it could greatly improve the efficiency!\"]}", "{\"comment\": \"**Question 2: Testing on Non-MuJoCo Environments**\\n\\n- We have conducted additional experiments on non-MuJoCo environments provided by OpenAI Gym, including environments with discrete action spaces, such as CartPole-v1 and LunarLander-v2.\\n- For CartPole-v1 and LunarLander-v2, SAM+PPO shows consistent gains over PPO for the action robustness experiments (we use action noise with $\\\\sigma = 0.2$).\\n\\n| Algorithm | CartPole-v1 | | LunarLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Perturbed | Nominal | Perturbed |\\n| PPO | 500 | 464(-36) | 200 | 175(-25) |\\n| SAM+PPO | **500** | **481(-19)** | **200** | **188(-12)** |\\n- As shown in the table above, SAM+PPO consistently outperforms PPO by taking advantage of flat rewards. 
Also, these results support our claim that flat rewards promote RL robustness even in non-MuJoCo environments.\\n- Also, for the noisy reward cases, SAM+PPO shows gains (we use reward noise with $\\\\sigma = 0.1$; the noise is added during training, as done in the main experiment).\\n\\n| Algorithm | CartPole-v1 | | LunarLander-v2 | |\\n| --- | --- | --- | --- | --- |\\n| | Nominal | Noisy | Nominal | Noisy |\\n| PPO | 500 | 432(-68) | 200 | 165(-35) |\\n| SAM+PPO | **500** | **458(-42)** | **200** | **182(-18)** |\\n- In our revised manuscript, we have added these results in Appendix D.1.\\n\\nWe truly appreciate your careful and thoughtful reviews. We hope our response clarifies your concerns.\"}", "{\"title\": \"thank you\", \"comment\": \"I would like to thank the authors for the very thorough and detailed reply. After going through it and reading the other reviewers' discussions, I\u2019m happy to recommend acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Proposition 1\", \"comment\": \"After reading the other reviews and seeing that Proposition 1 (and its proof) have changed a bit - it raises the following concern:\\n\\nYou've provided only an upper bound on $\\\\Delta^*$, meaning that in principle $\\\\Delta^* \\\\to 0$ is possible. This would make the following Remark vacuous! If $\\\\Delta^* \\\\to 0$, then the resulting policy is *not* action robust.\\n\\nThus, it seems like instead of an upper bound, we really need a lower bound on $\\\\Delta^*$. Given (A) the limited amount of time, (B) I did not bring this up in my original review and (C) I'm viewing the paper as mostly experimental at this point; I would conclude that this is not entirely detrimental. However, it is a bit concerning from a theoretical standpoint. \\n\\nIf you can provide some hints on how to proceed for future work or reduce the theoretical claims (esp. 
in connection with my previous concerns about reward vs trajectory return), I think that would be the most appropriate way to proceed. \\n\\nI look forward to hearing back from the authors!\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for the very thorough reply. I\\u2019m happy to recommend acceptance.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": [\"We thank the reviewer for their thoughtful feedback and valuable suggestions. As detailed below, we have addressed the weaknesses and questions raised.\", \"**Weakness 1 and Questions 2, 3: Mixed evaluation results**\", \"**Questions of \\u201cWhy in \\\"Nominal\\\" SAM+PPO still has a higher reward, e.g. Table 1+2, Fig. 4,5. Similarly, experiment in 5.2, why when action noise is small, i.e. even equal to 0, SAM+PPO still performs better than the others, because the objectives of PPO and SAM+PPO would converge to the same one?\\u201d:** To the best of our understanding, the issue is about the better performance of SAM+PPO over PPO, even in a nominal case without perturbations. To answer this, we want to refer to the fact that SAM is also effective in improving the performance of models even without train-test discrepancy. For example, the original paper of SAM [1] and the related works [2] (Stochastic Weight Averaging (SWA); another approach to find flatter minima) widely validate that the flatter minima improve the model performance in popular benchmarks, including CIFAR-10, CIFAR-100, Flower, ImageNet, etc. It is because the train and test splits are commonly separated from each other, so the deep models need to be well-generalized to test splits, which is distinct from the train splits. For image classifications, the train and test images are different even with the same categories. For reinforcement learning, we separately generate different test episodes even with the same configurations of environments. 
Due to the gap between train and test splits, deep models commonly suffer from performance degradation in testing. **From this point of view, finding flatter minima via SAM is widely accepted to improve the generalization performance in testing. That is why SAM+PPO outperforms PPO in some nominal cases.** Also, even in training with the nominal case, **the minima found by SAM+PPO are surely different from the minima by PPO**. We want to remind you that SAM+PPO considers the flatness of loss surfaces, but PPO does not. Therefore, PPO can find sharper minima, but SAM+PPO is forced to find flatter minima.\", \"[1] Foret, P., Kleiner, A., Mobahi, H., & Neyshabur, B. (2020). *Sharpness-Aware Minimization for Efficiently Improving Generalization*. International Conference on Learning Representations (ICLR).\", \"[2] Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. (2018). Averaging weights leads to wider optima and better generalization.\\u00a0*arXiv preprint arXiv:1803.05407*.\", \"**Further explanation on high friction cases:** As the reviewer acknowledged, SAM+PPO shows clear performance gains in the joint variations of friction and mass. However, as pointed out, in the Walker2d-v3 case, SAM+PPO outperforms others in low frictions but not in high frictions (quite mixed results).\", \"It is challenging to fully understand the empirical results, particularly in specific settings, i.e., the high friction regime. However, we have tried our best to explain the results as follows:\", \"First of all, compared with HalfCheetah-v3 and Hopper-v3, the Walker2d-v3 environment is particularly sensitive to changes in friction due to the complex dynamics of balancing and coordinating two legs.\", \"Second, the increasing friction from low to high can be understood as the dramatic changes in environments. The agent of Walker2d-v3 has its center of mass at the upper side of its body compared with HalfCheetah-v3 and Hopper-v3.
Thus, the high friction makes the agent easily fall over.\", \"Third, SAM does not directly perturb actions but indirectly perturbs actions via perturbing the policy parameters. So, SAM can experience actions as varied as those that might be found by perturbing policy parameters. In contrast, other action-robust RL like RARL explicitly perturbs the actions to experience the worst case.\", \"Based on the aspects above, we conjecture that SAM+PPO focuses less on experiencing high friction cases, which are dramatic changes in environments.\", \"However, we want to point out that SAM+PPO can widely experience actions, transition probabilities, and reward shifting by \\u201cindirectly\\u201d perturbing the policy parameters. As empirical evidence, SAM+PPO outperforms PPO in the various factors of perturbations, but RNAC or RARL shows comparably limited gains over PPO (referring to Figure 6 and Table 1 in the main paper).\", \"To our understanding, we here provide the following additional results and discussions for verifying the benefits of SAM+PPO. In brief, we have i) additional comparison with a robust RL baseline called RARL for emphasizing the performance gains of SAM+PPO, ii) SAM with other policy-based methods, i.e., TRPO, for highlighting the applicability of SAM to other policy-based methods, iii) SAM+PPO in other environments, for confirming wide applicability of SAM+PPO in various settings, iv) Understanding how SAM+PPO enhances the robustness, by describing how SAM achieves better robustness across various factors than other related works.\"]}
Purely for aesthetics, removing the axis ticks and labels, and maybe even combining both into one bigger figure can help even more - just my opinion.\\n\\nAlso, thanks for the additional illustrations, it helps give a better quick intuition.\"}" ] }
4O0v4s3IzY
On the self-verification limitations of large language models on reasoning and planning tasks
[ "Kaya Stechly", "Karthik Valmeekam", "Subbarao Kambhampati" ]
There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatically with scale has been tempered thanks to a slew of counterexamples--ranging from multiplication to simple planning--there persists a widespread belief that LLMs can self-critique and improve their own solutions in an iterative fashion. This belief seemingly rests on the assumption that verification of correctness should be easier than generation--a rather classical argument from computational complexity--which should be irrelevant to LLMs to the extent that what they are doing is approximate retrieval. In this paper, we set out to systematically investigate the effectiveness of iterative prompting in the context of reasoning and planning. We present a principled empirical study of the performance of GPT-4 in three domains: Game of 24, Graph Coloring, and STRIPS planning. We experiment both with the model critiquing its own answers and with an external correct reasoner verifying proposed solutions. In each case, we analyze whether the content of criticisms actually affects bottom line performance, and whether we can ablate elements of the augmented system without losing performance. We observe significant performance collapse with self-critique and significant performance gains with sound external verification. We also note that merely re-prompting with a sound verifier maintains most of the benefits of more involved setups.
[ "Large Language Models", "Reasoning", "Planning", "Self-Critique", "Verification" ]
Accept (Poster)
https://openreview.net/pdf?id=4O0v4s3IzY
https://openreview.net/forum?id=4O0v4s3IzY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yblzn2VuSB", "yTw5LEuKtO", "vwRenDRUvP", "ucGEciyCCy", "qyvZcICWID", "poIRFaCKt3", "dxKJW2BGu1", "doVaSoi2Z2", "cOxUsPOAnJ", "YzwgZotadF", "YS9qECJZAR", "RlnOYit02U", "QcFShF7Q0P", "ITYxyCFQ03", "8BEWJvQIub" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730719375469, 1732385315551, 1729160280558, 1732241973445, 1737523902341, 1732241327972, 1730649768483, 1732880848188, 1730571153074, 1732242121568, 1734993717237, 1732242219592, 1732242502766, 1732241367746, 1732748398142 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_Dj2B" ], [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_NPSN" ], [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_jfRh" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_R3Bw" ], [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_jfRh" ], [ "ICLR.cc/2025/Conference/Submission8343/Reviewer_NPSN" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Submission8343/Area_Chair_9GKN" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ], [ "ICLR.cc/2025/Conference/Submission8343/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper evaluates the self-verification abilities of LLMs in reasoning and planning tasks using iterative prompting and critiquing. It contrasts the performance of LLMs self-verifying their solutions against external sound verifiers across three domains: Game of 24, Graph Coloring, and STRIPS planning. 
Findings indicate that LLMs underperform in self-verification and that external sound verification significantly improves the accuracy of solutions. The study suggests the ineffectiveness of self-critique mechanisms in LLMs and recommends integrating external verifiers for better performance in reasoning tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses a novel aspect of LLMs\\u2014self-critique and iterative verification\\u2014that is underexplored in the existing literature. It challenges the assumption that LLMs can effectively self-critique by demonstrating that external verification offers more reliable improvements.\\n2. The experimental setup is clearly described, allowing for reproducibility and understanding of how iterative prompting affects LLM performance in reasoning tasks. The paper methodically outlines its methodology and the rationale behind using specific domains for testing (Sections 4 and 3). \\n3. The findings significantly contribute to understanding LLM limitations in self-verification tasks, which is critical for deploying these models in real-world applications where accuracy and reliability are paramount. \\n4. The study is well-structured and robustly empirically analyzed, providing a comparative assessment of LLMs with and without external sound verifiers.\", \"weaknesses\": \"1. The paper\\u2019s focus on only three specific domains might limit the generalizability of the findings. While these domains are relevant, more varied tasks could provide a broader understanding of LLM capabilities across different reasoning types (Section 3).\\n2. The analysis of the self-critique mechanism lacks depth regarding why LLMs fail at self-critique. Specific instances of LLM outputs and their failures would enrich the discussion by pinpointing the flaws in LLM reasoning strategies (Section 5.1). \\n3. 
There is no detailed discussion on the computational cost and efficiency of using external verifiers versus self-verification. This information would be crucial for practical implementations where resource constraints are a significant consideration. \\n4. The paper does not thoroughly explore the theoretical implications of its findings on the computational complexity theories surrounding LLMs and self-verification. A deeper theoretical analysis could provide insights into the fundamental limitations of LLM architectures (Section 2).\", \"questions\": \"1. How do the authors anticipate their findings will generalize to other complex reasoning tasks not covered in the study? Can the observed ineffectiveness of self-critique mechanisms be extrapolated to different types of LLMs or reasoning models?\\n2. Could the authors elaborate on the choice of domains for the study? Why were these specific domains chosen, and how do they represent the broader spectrum of reasoning tasks applicable to LLMs? \\n3. What additional mechanisms or modifications do the authors suggest could potentially improve the self-verification capabilities of LLMs? Is there ongoing work to develop more effective internal critique mechanisms within LLMs? \\n4. How do the authors envision the impact of their findings on the future development and deployment of LLMs in safety-critical applications? What precautions or additional measures would they recommend based on their study\\u2019s outcomes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their reply and extra experiments. Thank you for correcting my misunderstanding. Most of my concerns are resolved. I concur with the point that the paper is challenging and the conclusion that in many scenarios where verification is easier than generation, self-verification actually can't bootstrap the performance. 
I would like to raise my score. I still have some further questions to ask and discuss.\\n\\n1. Task difficulty: I understand the authors selected tasks whose testing accuracy ranges from 4% to 40%, but according to the LLM+LLM results in Table 1, the only outlier is Blocksworld, whose LLM+LLM accuracy is larger than the S.P. accuracy. It also has the highest baseline accuracy (40%), while the others are significantly lower. The experiment would be more comprehensive if, in future studies (not now), there were several tasks whose S.P. accuracy is around 50% and several around 80%.\\n\\n2. Could the authors discuss further why only Blocksworld is the outlier? Are there any reasons behind the numbers that make Blocksworld different from the other 3 tasks and lead to the difference in the performance of LLM+LLM? Insights into what kinds of tasks are feasible for self-critique would be valuable.\\n\\n3. I concur that the selected tasks are easier to verify than to generate based on complexity theory. A minor question is, what kinds of tasks are harder to verify? Could the authors provide some examples?\"}
This is novel and interesting.\", \"quality\": \"This paper considers problems of memorization and lack of ground truth during evaluation, which are important concerns.\", \"clarity\": \"The experiment setup is quite clear.\", \"significance\": \"Knowing what LLMs are actually doing when people claim they can do a lot of things is very important.\", \"weaknesses\": \"1. Is it fair to compare LLMs with oracle verifiers? The finding that LLM self-critique can sometimes downgrade their generation performance is interesting, but oracle processors are not always accessible in all tasks (as the authors mentioned in their paper, some tasks such as creative writing do not have a ground truth answer). I'm not surprised that an oracle verifier is improving LLM performance, but I wonder if it is possible that an LLM can serve as a decent alternative when a sound verifier is absent in general areas?\\n\\n2. The task domains are constrained while the general conclusion is very strong. The authors evaluated three main planning tasks and one LLM (GPT-4) with four datasets, with no clear objective as to why the conclusion drawn from these three specific tasks can be generalized. It is unclear why these tasks and datasets can represent general LLMs' lack of self-verification capability in a wide range of tasks.\\n\\n3. Some other details in this paper are also overclaimed, e.g., on page 2, the authors claim \\\"...the state-of-the-art GPT-4\\\" while GPT-4 is not SotA in many benchmarks anymore (in [1], inter alia).\\n\\n4. There are quite some typos. Writing needs to be done with more care.\\n\\n[1] Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., ... & Ganapathy, R. (2024).
The llama 3 herd of models.\\u00a0arXiv preprint arXiv:2407.21783.\", \"questions\": \"I don't have more questions but authors are encouraged to address my concerns in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the encouraging review; we are gratified that you see the importance of the contribution.\\n\\nRegarding your question about the generality of the experiments, and whether we think they will stand the test of time on other LLMs: We believe that all autoregressive LLMs trained via teacher forcing\\u2013which is basically all the LLMs since GPT2\\u2013will have fundamental limitations on reasoning tasks (a fact that is being appreciated more widely thanks to several critical studies). In the end, verification is a reasoning task as much as generation is\\u2013and thus we are confident that our experimental results will hold on all such LLMs. \\n\\nWhile we reported all our experiments on GPT4, our partial empirical studies with other LLMs (as shown in the table below), including GPT-4o, have largely been in line with the results we reported on GPT4. Nevertheless, we are currently in the process of replicating the experiments on LLaMA, and hope to have the results incorporated before the end of the discussion period, but certainly for the camera-ready version. \\n\\n| Model | Domain | S.P | LLM+LLM |\\n|-------------|-----------------------|-----|---------|\\n| GPT-4o | Graph coloring | 9% | 0% |\\n| GPT-4o-mini | Graph Coloring | 0% | 0% |\\n| GPT-4o-mini | Graph Coloring (Easy) | 30% | 0% |\\n| GPT-4o-mini | Blocksworld | 29% | 4% |\\n| GPT-4o-mini | Mystery Blocksworld | 1% | 0% |\\n\\n\\n> **The use of \\\\citep and \\\\citet is not correct. 
Also there are some small spelling/typesetting issues here and there that should be easily fixable.**\\n\\nWe\\u2019ve fixed the citet/citep issue, as well as updating the typesetting and fixing a few commas in the revised version. Thank you for pointing this out!\"}
\\n\\nIn other words, previous claims that LLMs can bootstrap themselves on arbitrary problems are misleading and not robust, even when the playing field is skewed in their favor. Even worse, we've shown that the LLM self-verification loop can in some cases significantly decrease performance.\\n\\nFinally, we expect our conclusions to be applicable to many overall ambiguous and complex tasks, because partial verifiers can be constructed and used together. Most real-world tasks contain a mixture of formally verifiable and tacit aspects. For example, see the TravelPlanning benchmark [3] which combines \\u201chard\\u201d and \\u201csoft\\u201d critics of LLM-generated plans. Or consider linters, unit tests, etc in software development. \\n\\n\\n\\n**What additional mechanisms or modifications do the authors suggest could potentially improve the self-verification capabilities of LLMs? Is there ongoing work to develop more effective internal critique mechanisms within LLMs?**\\n\\nIn our view, verification/critique mechanisms can provide three things: a (potentially inaccurate) binary signal about correctness, feedback that hopefully elicits the correct answer, and guarantees over the output of the complete system. While the first two can be improved either with customized pre-training/fine tuning that is most relevant to the task, they would remain brittle and sensitive to minor distributional shifts despite them being semantics-preserving. The third\\u2013giving guarantees on verification status\\u2013will be out of reach for pre-trained systems. \\n\\nWhile LLMs may improve on the first points over time, especially within domains that are well-represented in synthetic training data geared towards verification, their nature as uninterpretable black boxes precludes them from fulfilling the final role. 
Even when we intuitively think we have a grasp on how they fare on some problem class, they may in fact just be learning brittle distributional features that eventually fail in novel situations [4].\\n\\n**How do the authors envision the impact of their findings on the future development and deployment of LLMs in safety-critical applications? What precautions or additional measures would they recommend based on their study\\u2019s outcomes?**\\n\\nThe self-critiquing limitations of LLMs are of particular danger in safety-critical applications\\u2013where the system might send out a potentially incorrect solution for deployment. \\nAs we discussed in the conclusion and introduction, we believe that verification should be offloaded to sound systems when feasible, and LLMs should not be trusted in their current form in other scenarios without significant oversight. \\n\\nMost complex tasks contain a mixture of explicitly verifiable and tacit aspects. We believe that partial verifiers can be constructed and used together for such problems [5]. Any sort of LLM verification must be tested thoroughly as there are cases like the ones we\\u2019ve shown where they actually worsen performance.\"}", "{\"summary\": \"This is an experimental paper studying the popular technique of self-verification of LLMs for enhancing apparent reasoning capabilities. The stringent experimental protocol shows that the self-verification does not really work and shows that other techniques like using formal (symbolic) verifiers are better suited to achieve automated reasoning with LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper is really well written and easy to follow.
The non-technical nature might have helped here.\\n2) I enjoyed reading the (extensive) related work section where problems in prior work are made explicit.\\n3) All claims made in the paper are substantiated by appropriate experiments.\", \"weaknesses\": \"1) For someone familiar with the field, the findings might be a little bit obvious. However, given the number of papers being published using self-verification, I nevertheless believe this to be an important study when it comes to combating the delusion of self-verification.\\n\\n2) Given that this is an experimental study it might be good to also point out weaknesses in the experimental protocol. Specifically, making explicit all the assumptions that were made and what might change if they were not to hold. Concretely, what gives the authors the confidence that their findings have a high probability of standing the test of time.\", \"questions\": \"See point 2) in weaknesses and more specifically: are there any arguments that the findings do not only hold for the examined LLMs but in general for transformer-based auto-regressive models, or even other models like discrete diffusion models.\", \"minor_comment\": \"The use of \\\\citep and \\\\citet is not correct. Also there are some small spelling/typesetting issues here and there that should be easily fixable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
In contrast, a \\\"sound external verifier\\\" significantly enhances accuracy, with simpler re-prompting methods maintaining most of these benefits.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The presentation and writing are clear.\\n2. The authors made adjustments to the test datasets to ensure their validity. For example, they generated new instances to prevent test set memorization and modified the task solution format to better evaluate the self-verification pipeline.\\n3. The systematic analysis of self-verification provides insights into the cases where the verification module can help improve LLM performance.\", \"weaknesses\": \"1. The experiment design is not entirely persuasive.\\nThe authors attempt to challenge the assumption that verifying correctness is easier than generating solutions, suggesting that self-verification does not improve performance. However, task difficulty should be considered. Apart from Blocksworld, the accuracy in other tasks is quite low. According to Table 2, \\\"LLM Verification Results,\\\" verification performance varies across tasks, especially in terms of the False Negative Rate (FNR). In Blocksworld, which has the lowest FNR, self-verification actually improves task accuracy from 40% to 55%. This suggests there are cases where verification is easier than generation and where self-verification contributes positively to task accuracy. More tasks should be added for more comprehensive results.\\n\\n2. The authors use a \\u201csound\\u201d verifier as a comparison, but it\\u2019s unsurprising that a ground-truth verifier significantly improves performance. With ground-truth verification, the model can eliminate incorrect answers and focus on sampling correct ones, or at least provide ground-truth information. 
Taking the first point into account, a more nuanced conclusion could be that in tasks where verification is easier than generation, self-verification helps; otherwise, it has no benefit or even harms performance. The improvement limit for self-verification is thus bounded by the effectiveness of the \\u201csound\\u201d verifier. \\n\\n3. The exact GPT version should be specified, and more models, particularly advanced ones, should be tested for comprehensive results.\", \"questions\": \"Please discuss the points mentioned in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful comments and especially the recognition that our experimental study sheds considerable light on if and when self-verification can be helpful.\\n\\nUnfortunately however, the rest of the review suffers from two serious misunderstandings of the paper\\u2013that we bring up and clarify below.\", \"in_weakness_1_you_say\": \"> **The authors attempt to challenge the assumption that verifying correctness is easier than generating solutions, suggesting that self-verification does not improve performance**\\n\\n\\nWe are afraid that this interpretation is off the mark. Our paper is not concerned with the question of whether or not verification is easier than generation. We specifically choose tasks on which we know that verification is easier than generation. Formalizing this in terms of complexity theory, across all these tasks, generation is in a higher complexity class than verification. Proposed solutions can be verified in polynomial time, but generating these solutions is NP-complete (graph coloring) or even PSPACE-complete (planning).\\n\\nWhat we challenge instead is the prevailing wisdom that LLMs are somehow sensitive to the lower computational complexity of verification. 
We argue that there is no reason to believe that LLMs will be better at verification than generation. Specifically, we challenge the claim that in these scenarios LLMs can verify proposed solutions better than they can generate them, and thus robustly bootstrap their performance to higher levels.\\n\\nOur results indeed validate our belief/hypothesis, and show that\\u2014in fact\\u2014the gains seen in previous studies may be worse than illusory: in many cases, we see performance worsen with LLM self-critiques. \\n\\nContinuing on, the reviewer says\\n\\n\\n> **more nuanced conclusion could be that in tasks where verification is easier than generation, self-verification helps**\\n\\n\\nAs you can see from the clarification above, the paper in fact showcases the opposite of the reviewer\\u2019s proposed conclusion!\\n\\n\\nAbout the concern that we are testing on benchmarks where the LLM accuracy is not very high to begin with\\n\\n> **However, task difficulty should be considered. Apart from Blocksworld, the accuracy in other tasks is quite low.** \\n\\nWe would argue that in practice, self-verification is most useful when base performance is low, which reinforces the validity of our testing on tasks that range from 4 to 40% standard prompting accuracy. Some of the previous studies have focused on tasks on which LLM performance is already very high, sometimes as high as 98%--which makes the whole evaluation of effectiveness of self-verification a moot point. \\n\\nWe specifically chose a set of tasks where the initial accuracy of LLMs is low to begin with, in order to gain more significant resolution in our analysis of how much the self-verification scheme does or does not improve. Note that,\\n\\n\\n> **With ground-truth verification, the model can eliminate incorrect answers and focus on sampling correct ones, or at least provide ground-truth information.**\\n\\nOur paper also addresses this point. 
As we discuss, ground-truth verification is providing two things: 1) an external system that decides whether an answer is correct and thus outputs it only if it is guaranteed and 2) some feedback to the LLM in the form of a backprompt. The claim that the model is itself eliminating incorrect answers is also tested\\u2014we compare ground truth verification with all, some, or even zero critique given in section 5.2. We find that, in fact, we can gain most of the performance improvement without providing anything about the previous wrong answers to the model! This indicates that the performance improvements seen are mainly from rejection sampling the model, rather than from any inherent LLM ability to \\u201cfocus on sampling correct ones\\u201d when given feedback.\"}", "{\"metareview\": \"In this work, the authors empirically evaluate the self-critique/self-verification methods for LLMs, which have the promise to improve LLMs' own solutions. A very interesting finding is that self-verification doesn't work well, but performance improves with sound external verification.\\n\\nThe reviewers found the paper well-written, novel and significant. However there are some over-claimed statements, and the scope is relatively limited to support the reliability of the conclusion. Overall the strengths outweigh the weaknesses. All reviewers consider this work above the acceptance threshold.\\n\\nI recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers found the paper well-written, novel and significant. However there are some over-claimed statements, and the scope is relatively limited to support the reliability of the conclusion. Overall the strengths outweigh the weaknesses. 
All reviewers consider this work above the acceptance threshold.\"}", "{\"comment\": \"> **The improvement limit for self-verification is thus bounded by the effectiveness of the \\u201csound\\u201d verifier.**\\n\\nA sound verifier will, by definition, perfectly discriminate between correct and incorrect answers. The term \\u201csound\\u201d here is borrowed from mathematical logic, where it essentially means that a system preserves truth. It is one of a desirable pair of properties for a system that together guarantee it will output all and only correct answers. The other property is completeness, which, in this case, would apply to the LLMs (are they capable of guessing all plausible solution candidates?). In general, such completeness is not guaranteed (as we discuss, approaches like ToT can be seen as externally inducing LLMs to generate a diverse set of candidate solutions through prompt diversification). \\n\\nCompleteness would require that, for any problem of interest, the LLM would eventually output the correct answer (even if this answer were low probability and thus unlikely). However, even when we sample 150 answers per prompt in the case of Game of 24 (mentioned in A.2), the LLM only approaches 70% accuracy, and this accuracy asymptotes sharply. In short, we argue that it is not the effectiveness of the verifier that limits the improvement, but the incompleteness of the LLM. \\n\\nFinally, the issue of availability of sound verifiers is not as insurmountable as the reviewer seems to think it is. Most LLM verification systems are indeed tested on tasks where they already have external sound solvers/verifiers to provide synthetic data with correct verification labels. \\n\\nFurthermore, there are works in the literature (e.g. [1]) that show how verifiers can be teased out from the LLMs themselves with partial help of human or automated critics. Another idea is to learn verifiers from synthetic labeled data and use them to verify solutions. 
\\n\\nThe point of our work is to show that LLMs, out of the box, are no better at verification than generation. \\n\\n[1] Guan, Lin, Karthik Valmeekam, Sarath Sreedharan, and Subbarao Kambhampati. \\\"Leveraging pre-trained large language models to construct and utilize world models for model-based task planning.\\\" Advances in Neural Information Processing Systems 36 (2023): 79081-79094. \\n\\n[2] Zhang, Lunjun, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. \\\"Generative verifiers: Reward modeling as next-token prediction.\\\" arXiv preprint arXiv:2408.15240 (2024).\\n\\n> **The exact GPT version should be specified and more models**\\n\\nThank you for pointing this out. The exact model snapshot was GPT-4-0613. We have made this explicit in the revised version of the paper. While we reported all our experiments on GPT4, our partial empirical studies with other LLMs (as shown in the table below), including GPT-4o, have largely been in line with the results we reported on GPT4. Nevertheless, we are currently in the process of replicating the experiments on LLaMA, and hope to have the results incorporated before the end of the discussion period, but certainly for the camera-ready version. \\n\\n| Model | Domain | S.P | LLM+LLM |\\n|-------------|-----------------------|-----|---------|\\n| GPT-4o | Graph coloring | 9% | 0% |\\n| GPT-4o-mini | Graph Coloring | 0% | 0% |\\n| GPT-4o-mini | Graph Coloring (Easy) | 30% | 0% |\\n| GPT-4o-mini | Blocksworld | 29% | 4% |\\n| GPT-4o-mini | Mystery Blocksworld | 1% | 0% |\"}", "{\"comment\": \"We sincerely appreciate reviewer jfRh for their feedback. Below, we provide clarifications in response to the major concerns raised in the review.\\n\\n**Choice of domains:**\\n\\nWe would like to first point out that our work\\u2019s primary aim is to caution the community on using LLM self-verification methods for reasoning tasks. 
As mentioned in our related work section, we believe that reasoning is a fraught term. In previous work it has been unclear as to what definition of reasoning the works presuppose when making claims about LLM reasoning capabilities. Due to these shifting definitions and implicit assumptions, making concrete claims or pinning them down becomes a lot more difficult. This is why we restrict ourselves to focus on fully specified, verifiable problems that can be solved by deductive methods and have oracle verifiers. This allows us to check the quality of both binary verification and critique generated by the LLM. Furthermore, the algorithmic abilities these domains test are fundamental\\u2014any other reasoning task must include components that test these same capabilities, or else be only a retrieval task. We have made this explicit in the introduction section of the revised version of the paper.\\n\\nOur results show that using LLMs as verifiers and within a self-verification loop is harmful in general and much more care should be taken when deploying LLMs in such a loop, specifically in reasoning tasks. Per our conclusion, we expect our results to be applicable to many overall ambiguous and complex tasks, because partial verifiers can be constructed and used together. Most real-world tasks contain a mixture of formally verifiable and tacit aspects. For example, see the TravelPlanning benchmark which combines \\u201chard\\u201d and \\u201csoft\\u201d critics of LLM-generated plans. Or consider linters, unit tests, etc in software development.\\n\\nWhile we reported all our experiments on GPT4, our partial empirical studies with other LLMs (as shown in the table below), including GPT-4o, have largely been in line with the results we reported on GPT4. Nevertheless, we are currently in the process of replicating the experiments on LLaMA, and hope to have the results incorporated before the end of the discussion period, but certainly for the camera-ready version. 
\\n\\n| Model | Domain | S.P | LLM+LLM |\\n|-------------|-----------------------|-----|---------|\\n| GPT-4o | Graph coloring | 9% | 0% |\\n| GPT-4o-mini | Graph Coloring | 0% | 0% |\\n| GPT-4o-mini | Graph Coloring (Easy) | 30% | 0% |\\n| GPT-4o-mini | Blocksworld | 29% | 4% |\\n| GPT-4o-mini | Mystery Blocksworld | 1% | 0% |\\n\\n**Oracle verifiers**\\n\\nThe oracle case is useful because it allows us to carefully and systematically ablate the self-verification system. Note that there are two components to the oracle case: the sound verification which is an external system that ensures that any output is guaranteed correct, and the sound feedback which can be passed back to the LLM in part or in whole. In section 5.2, we use this distinction to test how much of the increased improvement is due to the LLM itself conditioning on both the claim that the answer is incorrect and the feedback passed back. What we find is that it doesn\\u2019t seem to matter much. When we remove the feedback entirely, we retain most of the performance improvement. This further shows that previous claims that LLMs improve because they effectively take in feedback are misleading.\"}", "{\"comment\": \"**Analysis of self-critique mechanism:**\\n\\nDue to page limit restrictions, we have moved our examples of specific instances and the kinds of errors that verifier LLMs make to the Appendix. We have provided a more in-depth comparison of GPT-4\\u2019s critique and verification abilities across the three domains in Appendices A3, A4 and A5. We have also provided examples of the common kinds of errors that GPT-4 makes in generating critiques. These include hallucinations about the structure of the problem (e.g. incorrectly representing preconditions, incorrectly stating vertices are connected when they aren\\u2019t), mismatches between calculations and final answer (e.g. 
G24 instances in which the verifying LLM states that the proposed expression simplifies to 24, but nevertheless says it is incorrect), and calculation errors (including incorrectly updating the state in planning domains).\\n\\n[1] Bylander, Tom. \\\"The computational complexity of propositional STRIPS planning.\\\" Artificial Intelligence 69, no. 1-2 (1994): 165-204.\\n\\n[2] Aho, Alfred V., John E. Hopcroft and Jeffrey D. Ullman. \\\"The Design and Analysis of Computer Algorithms.\\\" (1974).\\n\\n[3] Xie, Jian, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. \\\"TravelPlanner: A Benchmark for Real-World Planning with Language Agents.\\\" In Forty-first International Conference on Machine Learning.\\n\\n[4] Zhang, Honghua, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van Den Broeck. \\\"On the paradox of learning to reason from data.\\\" In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 3365-3373. 2023.\\n\\n[5] Kambhampati, Subbarao, Karthik Valmeekam, Lin Guan, Mudit Verma, Kaya Stechly, Siddhant Bhambri, Lucas Paul Saldyt, and Anil B. Murthy. \\\"Position: LLMs Can\\u2019t Plan, But Can Help Planning in LLM-Modulo Frameworks.\\\" In Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"We thank the reviewer for their prompt response. Below we provide further clarifications to your questions.\\n\\n> **Could the authors discuss more about why only Blocksworld is the outlier? Are there any reasons behind the numbers that make Blocksworld different from the other 3 tasks and lead to the difference in the performance of LLM+LLM? Insights into what kinds of tasks are feasible for self-critique would be valuable.**\\n\\nThere are a few things to note here: While Blocksworld is an outlier, this is likely due to it being a famous, well-represented domain in the pre-training data that has been studied since the 90s. 
However, this performance increase is much more brittle than it looks: Mystery Blocksworld is precisely a masked version of the exact same domain (we even use identical problems), and yet we don\\u2019t see the increase there. This bolsters the main point of our paper, because it shows that self-verification improvements can be illusory and may not apply generally outside of test distributions. Furthermore, as these two domains are merely translations of each other, they retain the exact same properties from a complexity theoretic perspective, and so we see an even clearer example of how LLMs are not sensitive to this.\\n\\nThis is part of the bigger message we are trying to get across: general verification is a form of reasoning. LLMs are not capable of reasoning--they can, however, do approximate retrieval based on their pretraining data. Thus they might look like they are reasoning on domains that are likely over-represented in their pretraining data, but we see that on identical but slightly perturbed domains which are not well-represented, the performance collapses, and self-verification does not provide the kind of shortcuts previous work has hoped would arise.\\n\\n\\n> **I concur that the selected tasks are easier to verify than to generate based on complexity theory. A minor question is, what kinds of tasks are harder to verify? Could the authors provide some examples?**\\n\\nObviously, the computational complexity of verification cannot be higher than that of generation. So we interpret your question as \\\"what are some cases where verification complexity is non-polynomial?\\\" A canonical example is Satisfiability of Quantified Boolean Formulas (QBF). QBF can be seen as a generalization of Satisfiability, which underlies our Graph Coloring task. It can be used to express games such as Go and Reversi [1]. 
In the case of QBF, satisfiability is PSPACE-complete, and verification of a solution is NP-hard (in contrast to SAT and Graph Coloring, where generation is NP-Complete and verification is polynomial).\\n\\nIn the case of planning too, while classical planning is PSPACE-complete for generation and polynomial for verification, more complex planning problems--such as conformant planning, where the agent doesn't have full observability--are known to be EXPSPACE-complete for generation and NP-Complete for verification [2].\\n\\n[1] Giunchiglia, Enrico, Paolo Marin, and Massimo Narizzano. \\\"Reasoning with quantified boolean formulas.\\\" In Handbook of satisfiability, pp. 761-780. IOS Press, 2009.\\n\\n[2] Rintanen, Jussi. \\\"Complexity of Planning with Partial Observability.\\\" In ICAPS, vol. 4, pp. 345-354. 2004.\\n\\n> **Task difficulty: I understand the authors selected tasks whose testing accuracy ranges from 4% to 40%, but according to the LLM+LLM results in Table 1, the only outlier is Blocksworld, whose LLM+LLM accuracy is larger than the S.P. accuracy. It also has the highest baseline accuracy (40%), while the others are significantly lower. The experiment would be more comprehensive if, in future studies not now, there are several tasks whose S.P. accuracy is around 50% and several around 80%.**\\n\\nWe thank the reviewer for their suggestion. We believe that the major distinction in terms of accuracy would be how well-represented the reasoning domain could be in the pre-training data. We intend to investigate this in future work.\"}" ] }
4NtrMSkvOy
Enhance the Transferability of Adversarial Attacks through Channel Pruning
[ "Chunghao Liao", "Shang-Tse Chen" ]
Recent studies have shown that neural networks are vulnerable to adversarial attacks, where attackers generate adversarial samples by imposing tiny noise. The tiny noise cannot mislead human perception, yet it leads the neural networks to generate wrong predictions. Transfer-based black-box attacks play a more significant role in recent studies due to their more realistic setting and considerable progress in performance. Previous studies have shown that different channels of the same layer in convolutional neural networks (CNNs) contain largely repetitive information, and we find that existing transferable attacks tend to exploit those redundant features more, which limits their transferability. Hence, we advocate using channel pruning and knowledge distillation to conduct model augmentation. In addition, we introduce a method of regularization on the gradients of intermediate feature maps of augmented models, which further enhances the transferability of our method. Comprehensive experiments demonstrate that imposing our method of model augmentation on existing methods can significantly improve the transferability of adversarial attacks in untargeted or targeted scenarios. Furthermore, our method outperforms state-of-the-art model augmentation techniques without using additional training datasets.
[ "adversarial attacks transferability", "channel pruning", "model augmentation" ]
https://openreview.net/pdf?id=4NtrMSkvOy
https://openreview.net/forum?id=4NtrMSkvOy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nQiIaZAvvC", "ehttbrPdMk", "eU69pvbo54", "cIxgkcZy39", "JqPZrn0jkr" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730499630847, 1731993417882, 1730652259725, 1730558053618, 1730681635261 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4559/Reviewer_g6Th" ], [ "ICLR.cc/2025/Conference/Submission4559/Authors" ], [ "ICLR.cc/2025/Conference/Submission4559/Reviewer_DPG4" ], [ "ICLR.cc/2025/Conference/Submission4559/Reviewer_msvL" ], [ "ICLR.cc/2025/Conference/Submission4559/Reviewer_CDqA" ] ], "structured_content_str": [ "{\"summary\": \"Adversarial examples have recently received much attention in the black-box transfer-based attack scenario due to its more realistic attack setting. To enhance the transferability of the generated adversarial example, the paper introduces GRASP, a model augmentation method that uses channel pruning to generate different models. These different pruned models are used to generate an adversarial example that may be less specific to the potential channel redundancy of the source model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Studying the transferability of adversarial examples is an important topic.\", \"The paper is easy to follow.\"], \"weaknesses\": \"-\\t**Experiments are insufficient and do not sufficiently support claims.** The comparison of the proposed method against other model augmentation methods is only done for one source model (ResNet50). Despite the source network being the same for Table 3 and Table 4, the targeted networks are different, which is surprising. The superiority of the proposed method cannot be established. Moreover, since the proposed method uses three pruned networks to generate the adversarial example, I expect the authors to compare their method against an ensemble method to ensure that the gain obtained is not due to ensembling networks. 
It would be interesting to compare against ensemble-based methods such as [A] and [B].\\n\\n\\n- **Motivation of the method.** In the introduction, the authors say, \\u201cOn the contrary, some of them are vague and seem to only contain weak features about the object in the input image. If we only conduct adversarial attacks based on original CNN models, the adversarial samples tend to \\u201coverfit\\u201d on those highly repetitive features.\\u201d I do not understand why, if a feature is not useful for predicting an object (whereas this feature may be useful for another object), this feature would be exploited by an attack that tries to fool the network. I would expect the attack to disrupt features responsible for predicting the target class and not those that are not useful. To validate their intuitions, can the authors provide an experiment or proof showing that adversarial examples tend to \\u201coverfit\\u201d those highly repetitive, unuseful features? Moreover, I find it strange that increasing the pruning rate, which is the core of the method, degrades the transferability of the generated adversarial examples. Can the authors clarify this point, please?\", \"typos\": \"- In Tables 3 and 4, there is a blank row. \\n- Tables 1 and 2 are overextended. \\n\\n[A] Tang, B., Wang, Z., Bin, Y., Dou, Q., Yang, Y., & Shen, H. T. (2024). Ensemble Diversity Facilitates Adversarial Transferability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 24377-24386).\\n\\n[B] Chen, B., Yin, J., Chen, S., Chen, B., & Liu, X. (2023). An adaptive model ensemble adversarial attack for boosting adversarial transferability. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4489-4498).\", \"questions\": \"-\\tWhy did you not discuss in the related work loss-based transfer methods such as, for example, [C] or [D]?\\n-\\tFor the contrastive learning, how did you choose these three transformations? 
Why not others?\\n\\n[C] Naseer, M., Khan, S., Hayat, M., Khan, F. S., & Porikli, F. (2021). On generating transferable targeted perturbations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 7708-7717).\\n\\n[D] Zhao, A., Chu, T., Liu, Y., Li, W., Li, J., & Duan, L. (2023). Minimizing maximum model discrepancy for transferable black-box targeted attacks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8153-8162).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you for your efforts in providing helpful reviews and suggestions. We will withdraw this submission and keep refining our work.\"}", "{\"summary\": \"In this paper, the authors propose a model augmentation-based method to improve the transferability of adversarial attacks. To enhance performance in black-box settings, they first introduce the technique of channel pruning to create a self-ensemble surrogate model, which mitigates the overfitting on redundant features or channels. Additionally, the authors integrate both knowledge distillation and gradient regularization into this ensemble model to further enhance the transferability across various target models. Experiments conducted on multiple CNN and ViT networks using the ImageNet benchmark dataset demonstrate that the proposed model augmentation method achieves relatively high attack performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors propose using channel pruning to enhance the transferability of adversarial attacks, along with knowledge distillation to recover the classification accuracy of pruned models.\\n\\n2. 
During the perturbation training, regularization of important feature maps is introduced to reduce the gradient variance to further enhance the attack performance.\\n\\n3. Experimental results on various target models demonstrate that the proposed transfer-based method outperforms baseline methods in attack effectiveness.\", \"weaknesses\": \"1. The motivation for introducing channel pruning is not clearly explained. For example, why was channel pruning selected over other pruning techniques, such as kernel or block pruning? Furthermore, as noted in lines 255-257, the L2-norm distance metric lacks robustness due to its sensitivity to outlier noise. Does this criterion maintain consistent performance across different surrogate models?\\n\\n2. The submitted manuscript lacks certain details regarding initial parameter optimization. For example, the network structure of the self-supervision (SS) module is not specified. Additionally, the authors introduce the L_{ss} loss, as defined in lines 295-296, to maximize the similarity of all positive and negative data pairs, which is inconsistent with standard contrastive learning practices.\\n\\n3. In Section 4.3, the authors propose regularizing gradient variance within an intermediate feature map but do not specify how this layer is selected within the surrogate model. Additionally, as mentioned in lines 322-323, the overall loss function is designed to minimize each loss term including the L_{ce} loss, which may conflict with the goal of crafting adversarial examples by maximizing L_{ce}. Furthermore, the details regarding the optimization of this overall loss function are not clearly explained.\\n\\n4. As shown in Tables 3-4, the experiment is conducted on only one surrogate model, which does not provide a comprehensive evaluation. 
Furthermore, the proposed GRASP method is not directly comparable to other model augmentation methods, as these methods do not incorporate knowledge distillation or gradient regularization.\\n\\n5. The authors claim that reducing gradient variance can balance the importance of different channels within a layer. However, as shown in Figure 5, the minimal change in attack success rate after regularizing different layers does not provide strong support for this claim.\", \"questions\": \"1. What are the key theoretical differences between pruning techniques like channel, kernel, and block pruning? Do these techniques exhibit different attack performances in both white-box and black-box settings?\\n\\n2. Could other model augmentation-based attack methods be further improved by incorporating the proposed initial parameter optimization and gradient regularization strategies?\\n\\n3. Could the authors provide additional details on how adversarial examples are trained on the ensemble of pruned models? In the overall loss function, does knowledge distillation by minimizing L_{ce} conflict with the perturbation training, which involves maximizing L_{ce}? Additionally, how can the proposed five loss terms be effectively trained, and how should their corresponding hyperparameters be adjusted?\\n\\n4. Could the authors provide additional details on why gradient regularization results in only slight changes in ASR across different layers of the surrogate model? Additionally, why is gradient regularization applied to only one layer rather than multiple layers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method for a transfer-based black-box attack. The main idea of the method is to augment the surrogate models with differently pruned models and generate the adversarial examples. 
First, to augment the models, the model is pruned at a predetermined channel pruning rate. To train the augmented models, knowledge distillation is used to match the accuracy of the pruned model, incorporating self-supervision and input augmentation as terms in the loss function. The authors present explanations and experiments to further demonstrate their idea. The experiments are done on an ImageNet-like dataset on targeted and untargeted attacks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper is easy to read through.\", \"weaknesses\": \"The weakness of the paper is threefold.\\n- The terminology used in the paper lacks objective justification and includes many subjective and inaccurate expressions.\\n-- See questions.\\n- Lack of Novelty.\\n-- The main methodology comes from the idea of channel-pruning based model augmentation. There are plenty of existing studies that analyze channel-wise pruning in terms of robustness, and numerous studies demonstrate that model augmentation can achieve higher transferability. [1-3] As it stands, there doesn't seem to be much take-away for the reader from the combined ideas in this study.\\n- The overall explanation and experiments are insufficient.\\n-- Lack of Validation of the proposed method.\\nThe experiments are limited to a single dataset, which does not demonstrate generalized results. \\nThe paper primarily explains the performance of the proposed method through intuition and explanation, but lacks supporting evidence. 
Adding intermediate empirical or theoretical validations for the proposed method would improve its persuasiveness.\\nTo enhance the persuasiveness of this study, more ablation studies are needed.\\n\\n-[1] Bai et al., \\\"Improving Adversarial Robustness via Channel-wise Activation Suppressing\\\", ICLR 2021\\n-[2] Borkar et al., Defending Against Universal Attacks Through Selective Feature Regeneration, CVPR 2020\\n-[3] Tramer et al., \\\"Ensemble adversarial training: Attacks and defenses\\\", ICLR 2018\", \"questions\": [\"Why are some channels redundant? Why can this explained with over-parameterization? (Line 47.)\", \"Why are the pruned surrogate models containing denser information if initial parameter optimization and regularization applied? (Figure 1)\", \"Why do samples overfit on the highly repetitive channels?\", \"How are the hyperparameters set in 4.3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a transferable black-box attack using the concept of model augmentation through channel pruning and knowledge distillation. The authors show that the transferability of existing black-box attacks is limited due to their uneven focus on the channels. The authors also introduce a gradient regularization to enhance the transferability further. The evaluation is done using a subset of 2000 images from ImageNet.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The attack is transferable across multiple network architectures, within CNNs as well as from CNNs to transformers.\", \"Intuitive, novel method for increasing transferability of the black-box attacks.\"], \"weaknesses\": [\"The paper does not follow the format guidelines closely. Tables 1 and 2 are out of the paper margin significantly.\", \"The presentation of the paper needs to be improved. 
The paper contains many grammatical errors, like a period in the middle of the sentences and incorrectly capitalized words. For example, on line 354, it should be \\\"Table\\\" instead of \\\"table\\\"; \\\",,\\\" on line 355; should be \\\"pruning\\\" instead of \\\"running\\\" on line 252; the sentence on lines 226-227 is incomplete; there is a blank line in Table 3 and 4 before the first result row.\", \"What is the size of the test set (out of 2000 images subset) used for evaluation?\", \"On line 329, the authors mention \\\"almost correctly classified by all the evaluated models\\\" about the chosen subset. What are the exact accuracy numbers for every network?\", \"Limitations like an increase in computing due to model augmentation and the trade-off between the transferability and number of models should be discussed.\"], \"questions\": [\"What is the size of the test set (out of 2000 images subset) used for evaluation?\", \"On line 329, the authors mention \\\"almost correctly classified by all the evaluated models\\\" about the chosen subset. What are the exact accuracy numbers for every network?\", \"Refer to weakness section for more comments.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4NsYCAxubi
fPLSA: Learning Semantic Structures in Document Collections Using Foundation Models
[ "Weijia Xu", "Nebojsa Jojic", "Nicolas Le Roux" ]
Humans have the ability to learn new tasks by inferring high-level concepts from existing solutions, then manipulating these concepts in lieu of the raw data. Can we automate this process by deriving latent semantic structures in a document collection using foundation models? We introduce fPLSA, a foundation-model-based Probabilistic Latent Semantic Analysis (PLSA) method that iteratively clusters and tags document segments based on document-level contexts. These tags can be used to model the structure of given documents and for hierarchical sampling of new texts. Our experiments on story writing, math, and multi-step reasoning datasets demonstrate that fPLSA tags help reconstruct the original texts better than existing tagging methods. Moreover, when used for hierarchical sampling, fPLSA produces more diverse outputs with a higher likelihood of hitting the correct answer than direct sampling and hierarchical sampling with existing tagging methods.
[ "Natural Language Processing", "Large Language Models", "Document Analysis", "Latent Semantic Analysis" ]
Reject
https://openreview.net/pdf?id=4NsYCAxubi
https://openreview.net/forum?id=4NsYCAxubi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qt33a8QanY", "nqqDnNYpAU", "mG6nvhtgXH", "m5GULhjf3L", "kK2xTiGAeZ", "UG91bRMO33", "Q606hnnof7", "MGYXJuK2JQ", "Gah2KMzz6Y", "7yBdH1MPpn" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "decision", "official_comment" ], "note_created": [ 1734709904247, 1730700048676, 1733296941875, 1733296855125, 1730044815690, 1730847057450, 1730667686833, 1733297006531, 1737524190590, 1733297109385 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12410/Area_Chair_DPX1" ], [ "ICLR.cc/2025/Conference/Submission12410/Reviewer_6feW" ], [ "ICLR.cc/2025/Conference/Submission12410/Authors" ], [ "ICLR.cc/2025/Conference/Submission12410/Authors" ], [ "ICLR.cc/2025/Conference/Submission12410/Reviewer_Jd27" ], [ "ICLR.cc/2025/Conference/Submission12410/Reviewer_pfT6" ], [ "ICLR.cc/2025/Conference/Submission12410/Reviewer_BWXW" ], [ "ICLR.cc/2025/Conference/Submission12410/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12410/Authors" ] ], "structured_content_str": [ "{\"metareview\": [\"This paper proposes a new method to tag documents based on high-level concepts, using an LM-based Probabilistic Latent Semantic Analysis. 
The paper shows this can help reconstruct the original texts from tags, and can be used for hierarchical sampling with more diverse outputs.\", \"Strengths\", \"The paper proposes a new way to tag text documents without any supervision (PLSA, 6feW, BWXW, Jd27).\", \"Empirical results are stronger than multiple baselines (PLSA, Jd27).\", \"Weaknesses\", \"Writing needs improvements (PLSA, 6feW, Jd27).\", \"Insufficient technical contribution (PLSA, 6feW).\", \"Setups in the experiments are too synthetic (PLSA).\", \"Insufficient necessary experiments, such as comparison to prior LM-based topic modeling methods (6feW).\", \"Many missing details and ablations (PLSA, 6feW, BWXW, Jd27).\"], \"additional_comments_on_reviewer_discussion\": \"A few clarification was provided during rebuttal, but the changes needed to incorporate them to the paper is significant.\"}", "{\"summary\": \"This paper introduces an improved version of probabilistic Latent Semantic Analysis (pLSA), termed fPLSA (Foundation-Model-Based PLSA), which incorporates Large Language Models (LLMs) to refine the modeling of latent tags in documents for topic modeling. It conducts some experiments to verify the effectiveness of fPLSA.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) Study a classic task\\n\\n(2) Propose a new method\\n\\n(3) conduct some experiments to verify the effectiveness of the proposed method.\", \"weaknesses\": \"(1) Insufficient technical contribution: The method utilizes LLMs for tag Description Generation. Specifically, the fPLSA model generates descriptive tags for the document segments by prompting the LLM with segments assigned to a particular tag to produce a cohesive summary that represents what these segments have in common. The parameters of the LLM are kept frozen during the process. This means the LLM is not fine-tuned during fPLSA training but is used in a static manner. 
While the integration of LLMs into pLSA offers a novel approach to document modeling, the core statistical methodologies underlying pLSA (like the EM algorithm) remain largely unchanged. This may limit the perceived novelty from a methodological standpoint.\\n\\n(2) Missing necessary experiments: the paper needs to involve more baselines that use LLMs for topic modeling, like Pham et al. (2024) and Wang et al. (2023) mentioned in the paper. \\n\\n(3) Poor writing: The transitions between some contents are abrupt, making it hard for readers to understand the points, such as the first and second paragraphs in the introduction.\\n\\n(4) Missing Implementation Details: all the prompts used in the experiments are not specified, such as those for fPLSA and GenOutline (a baseline).\\n\\n(5) Unclear motivation of the experiment setting: the paper uses GPT-4 for clustering and tagging while using ChatGPT to measure the accuracy. The authors explain it\\u2019s because GPT-4 may have data contamination issues on some benchmarks. I think this explanation is lame and needs more clarification, and it potentially leads to an unfair comparison.\", \"questions\": \"please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
2024) in our experiments.\\n3.\\tWriting: We will revise the paper to address the issue.\\n4.\\tMissing Implementation Details: We will update the appendix to include the prompt templates.\\n5.\\tMotivation for using ChatGPT to measure accuracy: While we use GPT-4 in most experiments, we use ChatGPT to generate the actual problem solutions in the accuracy experiment. This is because we notice that the zero-shot accuracy scores of GPT-4 on MATH and BBH benchmarks are very high already (which is possibly due to the data contamination issue [1]) and leave little room for improvements by exploring more diverse solution paths. Thus, we chose to use ChatGPT to generate the actual problem solutions in this experiment, which doesn\\u2019t cause unfair comparison because we compared with other tagging baselines that also use GPT-4 for clustering and tagging and ChatGPT for generating solutions. So the comparison is still fair.\\n\\n[1] Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, Arman Cohan. Investigating Data Contamination in Modern Benchmarks for Large Language Models. NAACL 2024.\"}", "{\"comment\": \"Thank you for the insightful feedback!\", \"responses_to_the_weaknesses\": [\"1.\\tClarifying some misunderstandings of the algorithm:\", \"The proposed algorithm is iterative, in which the tag \\u201cparameters\\u201d and assignments are iteratively updated, same as the EM algorithm, instead of the three-step procedure as described in the review.\", \"Eq.(4) represents the generative model of the text in each document. Similar to the standard PLSA algorithm, p(d) and p(xk|d) represents the empirical distribution of the documents d and the segments xk in each document, from which we sample the documents and document segments. p\\u0398(t|xk,d) and p\\u0398(w1\\u2026n|t) are optimized through our EM algorithm, which finds the Maximum A Posteriori estimates of \\u0398. 
We followed the notations and math formulas in the original PLSA paper in our paper.\", \"L153: The parameters being updated in our algorithm are the textual descriptions of the tags, so there is still training. It\\u2019s just that the parameters being updated are discrete tokens.\", \"L157: The tag assignment procedure is done by prompting the LLM with temperature=1, so it\\u2019s still a probabilistic sampling procedure based on the current tag descriptions. We will revise the paper to clarify that.\", \"According to the original PLSA paper, \\u201cTo derive conditions under which generalization on unseen data can be guaranteed is actually the fundamental problem of statistical learning theory.\\u201d In other words, the learned tags can generalize to unseen data under certain conditions based on statistical learning theory.\", \"2.\\tWe set the maximum number of iterations to 30 based on our observations in preliminary experiments that the learned tag descriptions become stable in less than 30 iterations.\", \"3.\\tClarification on the evaluation method: The evaluation method is the same for all baselines and our method. 
When evaluating both the baseline without tags, baselines using other tagging algorithms, and the tags learned using our algorithm, we prompt the model to solve a multiple-choice problem of picking the ground truth xk from a set of candidate segments.\", \"4.\\tPlease point us to any existing evaluation metrics for document segment tagging, because we could only find existing evaluation metrics for topic modeling of whole documents.\"], \"responses_to_questions\": \"1.\\tPrompt template: We will update the appendix with the prompt template.\\n2.\\tInitialization of the tag descriptions: Initially, we randomly assign some text segments to each tag and prompt the model to summarize their commonalities as the initial tag description.\\n3.\\tSec 4.1 Evaluation Datasets: We learn tags from solution segments only.\\n4.\\tSect 4.1: In Section titled \\u201cEvaluation Datasets\\u201d, we describe the datasets used for both learning the tags and evaluating the quality of these tags.\\n5.\\tL195: As described in L196, we sample the alternative segments from the LLM given the previous segments in the same document.\\n6.\\tL188: On the story dataset (WritingPrompts), the test documents refer to the stories themselves. On MATH, the test documents refer to solution texts.\\n7.\\tIn the Hits@K evaluation: (a) The tag sequence is generated randomly without given the test example/question. (b) The model predicts the answer based on the tag sequence in one prompting call. (c) We measure if a sampled solution is correct or not by checking if the final answer in the solution is the correct answer. (d) We need diversity in the generated outputs because on challenging reasoning tasks, we may need to search through the output space to find a better answer, and the searching algorithms wouldn\\u2019t work if the generated outputs are all similar. 
To this end, we measure if sampling with our tags can help generate more diverse solution paths in the way that increases the chance of finding a correct solution among all sampled outputs.\"}", "{\"summary\": \"### [Updates 12/04/2024]\", \"to_anyone_reading_the_reviews\": \"The authors published all their responses in the final hours of the rebuttal period (midnight EST). When I tried to respond just now, I realized I can no longer make my response public, so I'm updating my official review here. To clarify, **it was the authors who did not engage during the rebuttal period, not the reviewers**.\\n\\n### Original Content\\nThe paper introduces fPLSA, a foundation-model-based extension of Probabilistic Latent Semantic Analysis (PLSA), aimed at discovering semantic structures within document collections through clustering and tagging of text segments. Unlike traditional topic modeling, which often relies on word co-occurrences, fPLSA leverages Large Language Models (LLMs) to understand segment-level semantics in a broader document context. It applies an Expectation-Maximization (EM) algorithm to iteratively refine segment tags, enhancing both text reconstruction accuracy and hierarchical sampling. Experimental results on datasets for story writing, math, and reasoning show that fPLSA significantly outperforms traditional and LLM-based tagging methods in text reconstruction and solution diversity. This makes it suitable for generating effective problem-solving guides and diverse text outputs across varied tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The article is generally well-written except for the technical part, which I find somewhat confusing. According to the article, fPLSA's strengths lie in its enhanced semantic understanding, leveraging LLMs for capturing nuanced document structures beyond lexical co-occurrence. 
This approach yields more accurate text reconstruction and supports hierarchical sampling, producing diverse, high-quality outputs in applications like story generation and problem-solving. Its specific and detailed tagging outperforms generic LLM-based tags, enhancing content generation. Additionally, fPLSA\\u2019s unsupervised clustering reduces the need for labeled data, while its demonstrated adaptability across domains and improved Hits@K accuracy make it a versatile, efficient tool for semantic analysis and structured text generation.\", \"weaknesses\": [\"The usage of math symbols is sometimes confusing. Also not required, it is suggested that the authors follow the [default notation](https://github.com/ICLR/Master-Template/raw/master/iclr2025.zip) for a clearer presentation of the equations.\", \"The proposed method is not thoroughly explained. For example, the computation of some terms in (4) is missing, as well as its optimization algorithm, e.g., how to calculate $p(x_k|d)$. From my perspective, if one chooses to express the idea using math formulas, then every term should be clearly explained except for the cases where it is extremely obvious, which I think does not apply to (4).\", \"A figure or algorithm may better explain the proposed method.\", \"The authors use GPT-4 for clustering and tagging but GPT-3.5 for response generation and did not provide experimental results on other combinations. The performance of the proposed method therefore may not be universally applicable.\", \"Potential data leakage issues (detailed in Questions).\", \"Overall, I think the approach proposed by this article is rather straightforward and could be easily described with better clarity without introducing any formulae, perhaps except for the motivation part. In addition, it seems that this article may find a broader audience in the pure NLP community rather than a mixed community of different machine learning topics. 
Therefore I would recommend submitting this manuscript to an ACL ARR venue instead of machine learning conferences.\"], \"questions\": [\"I'm a bit confused by the relation between w, x, and d. If $w_{1:n}=x_k\\\\subset d$, how is $p(x_k|d)$ modeled in (4)? Why is it necessary to include both $x_k$ and $d$ as conditional terms?\", \"Context window: I'm not sure I understand how the segments are selected in this article. Is a segment a fixed-length sequence of tokens? Are there any overlaps between different segments? In Line 238, the authors mentioned \\\"we use a context window size of 2 on WritingPrompts and use unlimited context window\\\". How should the \\\"unlimited context window\\\" be interpreted?\", \"According to [1], the latent variable ($z$ in [1]) is supposed to be categorical. This article borrowed the same concept from [1] but I'm not sure whether this article follows the original setup. The authors did mention that they \\\"set the number of tags to 100\\\", but the example tags in Table 3 showed that the tags are natural language descriptions rather than categorical labels. I wonder how the tags are generated, and if calling it \\\"latent\\\" is still appropriate.\", \"In (5), $t_k$ is sampled conditioned on $x_k$, which is later used to estimate the probability of reconstructing $x_k$. Is this a typo? Doesn't this lead to data leakage and make the results of (5) unfairly high?\", \"For BBH, I'm not sure why it is necessary to \\\"use the step-by-step solutions produced by their automatic Chain-of-Thought prompt inference algorithm for clustering and tagging\\\". Does it mean that a part of the (ground-truth) solutions is utilized as the prompt to the model for problem-solving? I think this is a huge data leakage issue and would greatly undermine the soundness of the evaluation of the proposed method.\", \"Since tag generation is a recursive process, what would the token consumption be for achieving the presented results? 
How about the baseline models?\", \"[1] Hofmann, T. \\\"Probabilistic latent semantic indexing.\\\" Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval. 1999.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of discovering text segments that share common characteristics and assigning them the same tag description. It proposes an LLM-based method that iteratively assigns tags to document segments and improves each tag description based on the segment cluster. The authors aim to show that these tags are helpful for a reconstruction task and in improving \\u201cHits@K\\u201d accuracy in evaluation sets created from WritingPrompts, MATH, and the BBH benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This work proposes to discover \\u201ctags\\u201d for text segments in an unsupervised fashion and a novel algorithm that is inspired by probabilistic latent semantic analysis (PLSA).\", \"The algorithm leverages the ability of an LLM to analyze textual materials and is able to find detailed and meaningful tags, as shown in the qualitative results of the paper.\", \"The paper shows favorable empirical results compared to multiple baselines: traditional latent Dirichlet allocation, its variant + LLM, prompting, and chain-of-thought prompting.\"], \"weaknesses\": [\"1. The writing in this paper is often too generic and high-level. For example, when a reader reads the motivation at L011-L014 \\u201cHumans have the ability to learn new tasks by inferring high-level concepts from existing solutions, then manipulating these concepts in lieu of the raw data. 
Can we automate this process by deriving latent semantic structures in a document collection using foundation models?\\u201d, they may wonder:\", \"What tasks do you mean?\", \"Existing solutions of the \\u201cnew tasks\\u201d or other relevant tasks?\", \"What does it mean to manipulate high-level concepts?\", \"How do you define \\u201csemantic structures\\u201d in a document collection? It\\u2019s not precise to describe a set of \\u201ctags\\u201d as a structure.\", \"2. The novelty of the method is limited and its connection to PLSA and EM is loose. The proposed algorithm is simple: (1) Initialize a certain number of tag descriptions. (2) Prompt an LLM to assign a tag to each document segment based on the tag descriptions. (3) Let an LLM generate a new tag description that describes the shared characteristics of the segments in this cluster.\", \"The main Eq. (4) is actually not used: $p(d)$, $p(x_k|d)$, $p_\\\\Theta(t|x_k,d)$, and $p_\\\\Theta(w_{1\\\\dots n}|t)$ are not computed.\", \"L153: The parameters $\\\\theta_t$ are textual descriptions instead of floating-point parameters and no training is happening.\", \"L157: No probability distribution is involved. An LLM is employed to greedily perform the steps in the algorithm.\", \"PLSA is a generative model of the training documents that it is estimated on, and it is not a generative model of new documents. But this paper aims to find tags that apply to unseen examples.\", \"3. While the convergence criterion matters for an EM algorithm, this paper simply sets the number of iterations to 30. Not enough analysis is performed on the impact of the number of iterations.\", \"4. In the reconstruction experiments the method based on learned tags solves a multiple-choice problem of picking the ground truth $x_k$ from a set of candidate segments. However, baselines such as prompting in Eq. (7) require a language model to generate the ground truth $x_k$. These seem not comparable.\", \"5. 
Although the experiment results are positive compared to the baselines, the setups are synthetic. Would be nice to see the application of this algorithm to achieve competitive results according to standard evaluation metrics of the used datasets, which are common benchmarks.\", \"6. Many details are missing. See the Questions below.\"], \"questions\": \"1. What prompt templates are used in various experiments?\\n2. How are the textual descriptions $\\\\theta_t$ initialized in the algorithm? How can the initial tag descriptions meaningfully be assigned to the segments?\\n3. Sec 4.1 Evaluation Datasets: How do you convert the input query and the output answer of each example into segments? Do you learn tags only for segments of answers, but not queries/prompts/questions?\\n4. Sect 4.1: This is titled Evaluation Datasets but indeed describes data for clustering and tagging.\\n5. L195: How do you sample alternative segments?\\n6. L188: What do you mean by the test documents? The datasets are query-answer examples.\\n7. In the Hits@K evaluation:\\n(a) Do you first generate the tag sequence based on the input of a test example or randomly? \\n(b) Does a model predict an answer based on the tag sequence in one prompting call?\\n(c) How do you evaluate if a sampled solution is correct or not?\\n(d) Why do you say the proposed algorithm improves diversity in outputs, which is not evaluated? In fact, diversity is neither necessary nor sufficient for a model to perform a reasoning task well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces fPLSA (foundation-model-based Probabilistic Latent Semantic Analysis), a novel approach for identifying latent semantic structures in document collections by combining traditional PLSA with the contextual understanding of large language models (LLMs). 
fPLSA enhances probabilistic clustering and unsupervised topic modeling by assigning semantic \\\"tags\\\" to document segments through an iterative Expectation-Maximization (EM) process, where each tag captures both local meaning and broader document context. This structured tagging approach enables fPLSA to better capture complex segment relationships, making it valuable for hierarchical sampling, document analysis, and potentially other downstream tasks such as structured summarization. The paper demonstrates fPLSA\\u2019s effectiveness across diverse datasets\\u2014narrative (story writing), problem-solving (math), and multi-step reasoning\\u2014showing improvements in text reconstruction likelihood and Hits@K accuracy, underscoring its robustness and versatility.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Innovative Approach:** fPLSA is a well-conceived combination of probabilistic topic modeling and LLM-based embedding, creating a tagging system that captures both low- and high-level semantics. This approach enables a nuanced understanding of document structure that extends beyond traditional methods, addressing complex relationships within text segments.\", \"**Diverse Evaluation:** The method is rigorously evaluated across multiple datasets, including narrative, mathematical, and multi-step reasoning tasks, demonstrating consistent performance improvements in text reconstruction and sampling diversity. 
This diversity in datasets reinforces the robustness and generalizability of the approach.\", \"**Potential for Cross-Domain Applications:** fPLSA\\u2019s ability to structure and tag text meaningfully is a powerful tool for hierarchical content generation, segmentation, and structured summarization, with substantial applications across various domains, such as education, content generation, information retrieval, and summarization.\", \"**Foundation for Future Research in Unsupervised Document Tagging:** fPLSA provides a strong foundation for future work in unsupervised document tagging and text segmentation. Its hierarchical tagging approach encourages further exploration in transfer learning, document summarization, and adaptive segmentation, inspiring new research directions for improved document understanding and organization.\"], \"weaknesses\": [\"**Single-Document Applicability:** fPLSA heavily relies on cross-document patterns during training, which is not fully addressed in terms of single-document use cases. At test time, users often only have one document. It would be beneficial to clarify how fPLSA\\u2019s pre-trained tags would generalize to individual documents without access to cross-document patterns. For instance, can the model effectively apply pre-learned tags from similar training data to new documents?\", \"**Lack of Efficiency Analysis:** Given fPLSA\\u2019s reliance on LLMs, a discussion on computational efficiency would be valuable. While LLMs are powerful, they are computationally expensive. Addressing the practical feasibility of deploying fPLSA at scale (or proposing more efficient variations) would make the paper\\u2019s findings more actionable.\", \"**Potential LLM Biases:** Since fPLSA uses pre-trained LLMs to assign tags, there is a risk of encoding biases from the LLM's training data into the tags. 
The authors could explore ways to mitigate or assess the impact of these biases, especially for datasets or domains sensitive to fairness and accuracy.\", \"**Segmentation Granularity:** The paper does not discuss how sensitive fPLSA is to the choice of segment granularity (e.g., sentence, paragraph) and whether different segmentation approaches yield more cohesive or meaningful tags. Further examination of this could provide clarity on best practices for applying fPLSA across different document types and tasks.\", \"**Potential for Downstream Applications:** Although the paper\\u2019s results demonstrate fPLSA\\u2019s effectiveness in hierarchical sampling, the model's broader potential in downstream tasks is not explored. Given the rich, hierarchical nature of fPLSA tags, they could be valuable for applications like multi-level text summarization, where each tag could represent a theme or section for summarization. Exploring these applications would broaden fPLSA\\u2019s impact.\"], \"questions\": [\"How would fPLSA perform when applied to a single document at test time, especially if that document differs significantly in structure or content from the training set? Can pre-learned tags from similar training data reliably generalize to new documents in such cases, or would fine-tuning on representative samples be necessary to improve performance?\", \"Given fPLSA\\u2019s structured tagging capabilities, could the authors discuss its applicability to downstream tasks like structured text summarization and content retrieval? Prior research on text summarization with text segmentation has demonstrated that segmenting texts by themes can enhance summarization quality [1]. Could fPLSA\\u2019s tags be similarly used to segment and summarize each thematic section, creating a coherent multi-level summary? Additionally, might these tags support content retrieval or indexing by allowing documents to be searchable by thematic segments? 
Including a brief paragraph on such applications could highlight the contribution's versatility.\", \"Can the authors provide insights into fPLSA\\u2019s computational cost compared to the baselines? For instance, would a less resource-intensive model (like a smaller language model) yield competitive results without the same computational burden?\", \"How sensitive is fPLSA to the choice of segment granularity (sentence, paragraph, etc.)? In testing, did certain segmentation approaches yield more cohesive or meaningful tags, and if so, could the authors elaborate?\", \"Since pre-trained LLMs may encode biases, did the authors observe any potential bias issues during fPLSA\\u2019s tagging process? If so, what mitigation strategies might they recommend for fair and balanced tag generation?\", \"**References**\", \"1. Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes. AAAI 2022.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the insightful feedback!\", \"responses_to_the_questions\": \"1.\\tHow would fPLSA perform when applied to a single document at test time? Tags learned from the training documents through fPLSA can be directly applied to single document at test time, given that the test document shares similar characteristics with the training documents (e.g. if they are all math solutions or political speech documents).\\n2.\\tApplicability to downstream tasks like structured text summarization and content retrieval: Great point. 
Our algorithm can indeed be applied to improve text summarization and content retrieval as suggested.\\n3.\\tComputational cost: The exact token consumption depends on the document lengths, but the number of LLM calls for our algorithm is roughly (number_of_tags * number_of_iterations * 2 + number_of_segments), while the number of LLM calls for the prompting baseline is roughly (number_of_segments + 1). So if the total number of segments in the document collection is large, the additional computational cost of our algorithm would be relatively small compared to the overall cost.\\n4.\\tChoice of segment granularity: fPLSA can yield meaningful segment tags given that each segment contains meaningful information. For instance, on the story dataset, we segment the stories by paragraphs, while on the math solution dataset, we segment the solutions by sentences.\\n5.\\tPotential biases from LLMs: LLM may introduce model bias in the learned tags, although we didn\\u2019t observe any on our current evaluation datasets. We can potentially apply existing bias mitigation methods in our algorithm, which we leave for future work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the insightful feedback!\", \"responses_to_the_weaknesses\": \"1.\\tAlgorithm description: We will revise the algorithm description part to better explain the algorithm.\\n2.\\tGPT-4 for clustering and tagging but GPT-3.5 for response generation: While we use GPT-4 in most experiments, we use ChatGPT to generate the actual problem solutions in the accuracy experiment. This is because we notice that the zero-shot accuracy scores of GPT-4 on MATH and BBH benchmarks are very high already (which is possibly due to the data contamination issue [1]) and leave little room for improvements by exploring more diverse solution paths. 
Thus, we chose to use ChatGPT to generate the actual problem solutions in this experiment, which doesn\\u2019t cause unfair comparison because we compared with other tagging baselines that also use GPT-4 for clustering and tagging and ChatGPT for generating solutions. So the comparison is still fair.\", \"responses_to_the_questions\": \"1.\\tp(xk|d) refers to the distribution from which we randomly sample a text segment from a document. Empirically, we just randomly sample a segment index k from a uniform distribution over 1 to n, where k indicates which segment is the current one. We will revise the paper to better explain it.\\n2.\\tSegment length and context window: In terms of segment length, we take each paragraph as a segment for stories and each sentence as a segment for problem solutions. There is no overlap between segments. In terms of context window, it refers to the neighboring text segments to the current segment that we provide as additional context for tagging and clustering. A context window size of two means that we provide two neighboring text segments as context. And unlimited context window means that we provide all the other text segments in the same document as context.\\n3.\\tCategorical tags: The tags in our algorithm are also categorical. At the tag assignment step, the LLM is given the text information and all tags and is asked to choose the most suitable tag for the current text segment. The main difference between our algorithm and traditional PLSA is that, in PLSA, each tag is represented by a probability distribution over documents, while in our algorithm, each tag is represented by a textual description of a cluster of documents.\\n4.\\tReconstruction likelihood: In (5), we measure the reconstruction likelihood, i.e. how well the learned tags help reconstruct the original text xk. This is a common way to evaluate latent variable models like VAE in both NLP and Vision. \\n5.\\tThere is no data leakage in the evaluation. 
We learn the tags from step-by-step solutions from the training set and measure the Hits@K accuracy on a separate test set in which the problems are unseen. \\n6.\\tComputational cost: The exact token consumption depends on the document lengths, but the number of LLM calls for our algorithm is roughly (number_of_tags * number_of_iterations * 2 + number_of_segments), while the number of LLM calls for the prompting baseline is roughly (number_of_segments + 1). So, if the total number of segments in the document collection is large, the additional computational cost of our algorithm would be relatively small compared to the overall cost.\\n\\n[1] Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, Arman Cohan. Investigating Data Contamination in Modern Benchmarks for Large Language Models. NAACL 2024.\"}" ] }
4NgxI6Z74n
Memory-Efficient Self-Supervised Contrastive Learning with a Supervised Loss
[ "Eric Gan", "Baharan Mirzasoleiman" ]
Contrastive Learning (CL) is among the most popular methods for self-supervised representation learning. However, CL requires a large memory and sample size and careful hyperparameter tuning. These factors make it difficult to learn high-quality representations with a limited amount of memory. In this work, we theoretically analyze a recently proposed \textit{supervised} approach, DIET, for self-supervised representation learning. DIET labels every example by its datum index and trains on the labeled data with a supervised loss. DIET does not require a large sample size or hyperparameter tuning. However, it falls short when using smaller encoders and is memory intensive due to its massive classifier head. Given its remarkable simplicity, it is not obvious whether DIET can match the performance of CL methods, which explicitly model pairwise interactions between augmented examples. We prove that, perhaps surprisingly, for a linear encoder DIET with MSE loss is equivalent to spectral contrastive loss. Then, we prove that DIET is prone to learning less-noisy features and may not learn all features from the training data. We show that feature normalization can provably address this shortcoming and that the use of a projection head can further boost performance. Finally, we address the scalability issue of DIET by reducing its memory footprint. The modified approach, namely S-DIET, substantially improves on the linear probe accuracy of DIET across a variety of datasets and models and outperforms other SSL methods, all with limited memory and without extensive hyperparameter tuning. This makes S-DIET a promising alternative for simple, effective, and memory-efficient representation learning.
[ "contrastive learning", "self-supervised learning", "representation learning", "machine learning theory" ]
Reject
https://openreview.net/pdf?id=4NgxI6Z74n
https://openreview.net/forum?id=4NgxI6Z74n
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rKSUuEWNYN", "qxzPxmNE9N", "ofwGNnTZm5", "nQJJE3566c", "kos1kYqeaE", "kjdxj1KCBR", "ge1NYDUK9F", "gDwOZB01QV", "fNJ0buYzF2", "fM3ie5KV5R", "aSUxUT6Nxz", "Z3yJftlII9", "Vszt0p3wHt", "LggszqgvHT", "LSvba6DLEi", "HLwjmTzooa", "HLtJgmVoBG", "Bw5y1KANau", "Buw6JHWcGS", "4X8V2lFySi", "1Y5Iv0Shcj" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732477635532, 1730388718446, 1732147477565, 1732147748448, 1732499443861, 1737523697392, 1732477599871, 1732477650331, 1734404606892, 1732477617094, 1732594988934, 1732504380355, 1730621728692, 1729700173104, 1732146871863, 1732147177551, 1732147113324, 1729787765536, 1732146709873, 1730715363189, 1730194834025 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_umd8" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_nqvr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Area_Chair_x659" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_umd8" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_cXwx" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_nqvr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5309/Authors" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_T8fX" ], [ "ICLR.cc/2025/Conference/Submission5309/Reviewer_cXwx" ] ], "structured_content_str": [ "{\"comment\": \"We hope our rebuttal has addressed your concerns. As we are getting close to the end of the discussion session, we would like to reach out and see if you had a chance to read our rebuttal and if there is anything else we can further clarify? We are looking forward to your response.\"}", "{\"summary\": \"The manuscript studies the properties of a supervised representation learning method, called DIET, and then proposes an improved version of this method. Specifically, the authors show the equivalence between DIET and spectral contrastive loss proposed by HaoChen \\\\& Ma under a linear case. In addition, the improvement is motivated by the insight derived from the model introduced in the setting of Section 5.1. Although it looks interesting when all the strong assumptions are true, it is unclear if the results presented in the manuscript can provide guidance in practical settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The connection between DIET and spectral contrastive loss is interesting.\", \"weaknesses\": \"The analysis is built on some unrealistic assumptions. It is unclear whether the results and development in the manuscript can hold in practice. It could be more meaningful to make assumptions more carefully.\", \"questions\": \"1) The review in the manuscript is clearly incomplete. Some important literature is missing. The author may want to have a more comprehensive literature review.\\n\\n2) The analysis in Section 4 is not surprising due to the linear setting. Can the analysis be extended to the nonlinear case? 
The author may read the recent development under the nonlinear setting in Wang 2023. \n\n3) Assuming $W_H$ is an isometry seems a very strong assumption. Do the authors put a constraint for $W_H$ in the loss function to enforce such isometry? Otherwise, the authors may consider removing this assumption.\n\n4) The form of training example introduced in Section 5 is unrealistic. When does this assumption hold (at least approximately) in practical examples? Why are there two features, one is low noise and the other is high noise?\n\n5) Can we generalize the results in Theorem 5.1 to a more realistic setting?\n\n6) The idea in memory-efficient DIET is straightforward.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their comments on our paper. While we have detailed the contributions of our work in the general comment [(link)](https://openreview.net/forum?id=4NgxI6Z74n&noteId=Buw6JHWcGS), we would like to address the reviewer\\u2019s specific remarks in detail here.\\n\\nWhile the reviewer raises some concerns about the theoretical assumptions made in our work, we note that our assumptions are commonly used in the machine learning theory literature. Moreover, our empirical results in Section 6 confirm the validity of our results in practical scenarios. We hope to address each of the reviewer\\u2019s points in detail below:\\n\\n1. 
While we attempted to provide a broad overview of existing work on contrastive self-supervised learning (Chen et al., 2020; He et al., 2020; Oord et al., 2018; Grill et al., 2020; Chen & He, 2021; Zbontar et al., 2021; Peng et al., 2022; Yang et al., 2022; Dwibedi et al., 2021) and the theory behind it (Wang & Isola, 2020; Graf et al., 2021; Arora et al., 2019; HaoChen et al., 2021; Lee et al., 2021; Tosh et al., 2021; Wen & Li, 2021; Ji et al., 2021; Saunshi et al., 2022; HaoChen & Ma, 2022; Xue et al., 2023; Xue et al., 2024; Balestriero, 2023; Murphy, 2022), it is possible that we missed some papers in the literature review. If the reviewer has any specific papers that they feel must be included, we would be happy to add those in the revised version.\\n\\n2. Indeed, the analysis in Section 4 is surprising even in the linear case, given that the contrastive loss, which is based on the similarity between pairs of examples, and DIET, which is based on classifying examples based on pseudolabels, look vastly different. Additionally, to our knowledge, ours is the first fully rigorous equivalence between contrastive learning and supervised learning. While the exact equivalence does not hold for nonlinear models, our experimental results in Section 6 and Appendix D suggest that the equivalence approximately holds. Exploring this connection rigorously would be an interesting idea for future work. We were not able to identify which paper is Wang 2023, if the reviewer can provide a full reference we would happily provide a comparison.\\n\\n3. We do not put an explicit constraint to enforce that $W_H$ is an isometry. While $W_H$ is unlikely to be an isometry in practice, [1] showed in theoretical and empirical examples that a linear projection head only performs feature rescaling, which is related to the phenomenon of neural collapse [2]. Thus our results hold up to rescaling even if we do not explicitly enforce that $W_H$ is an isometry. 
We have added this discussion in the revised manuscript.\\n\\n4. Our data model in Section 5 is a variant of the sparse coding model, which has been widely used in previous work [1,3,4,5]. The use of a low noise feature and a high noise feature is to show that standard DIET learns mostly the low noise feature but DIET with normalization can learn both features. As a simple example, when identifying animals, say dogs versus birds, a low noise feature for birds could be the presence of wings (all birds have wings), while a high noise feature could be feather color, and the background would be noise. Besides, the ablations in Section 6 validate the effectiveness of normalization on more complicated real-world datasets.\\n\\n5. The data model in Section 5 can be generalized in a few directions, for example, by having more features per class or having a variable number of features in each example. However, this makes the analysis more tedious without providing any extra insights, so we chose the simple setting for clarity.\\n\\n6. The goal of our work is to show that our relatively simple method S-DIET can match the performance of CL and more complicated SSL methods. The fact that such a simple method can achieve state-of-the-art performance while being more memory-efficient is indeed surprising and we believe is worth sharing with the wider community. Meanwhile, more advanced modifications that further improve performance are left to future work.\\n\\n[1] Yihao Xue, Eric Gan, Jiayi Ni, Siddharth Joshi, and Baharan Mirzasoleiman. Investigating the benefits of projection head for representation learning. 2024.\\n\\n[2] Papyan, Vardan, X. Y. Han, and David L. Donoho. Prevalence of Neural Collapse during the Terminal Phase of Deep Learning Training. 2020.\\n\\n[3]: Zixin Wen and Yuanzhi Li. Toward understanding the feature learning process of self-supervised contrastive learning. 2021.\\n\\n[4]: Zou, D., Cao, Y., Li, Y., and Gu, Q. 
Understanding the generalization of adam in learning neural networks with proper regularization. 2021.\\n\\n[5]: Chen, Y., Huang, W., Zhou, K., Bian, Y., Han, B., & Cheng, J. Understanding and Improving Feature Learning for Out-of-Distribution Generalization. 2023.\"}", "{\"comment\": \"We thank the reviewer for their positive feedback about the theoretical and empirical contributions of our work and for their comments. We refer the reviewer to our general comment [(link)](https://openreview.net/forum?id=4NgxI6Z74n&noteId=Buw6JHWcGS) for a broader explanation of our work, but we hope to address the reviewer\\u2019s specific questions below:\\n\\n1. To clarify, the motivation for this work is not to address the reliance on large datasets or the drawback of smaller encoders. The main motivation for the paper is that the paradigm of pairwise losses that currently dominate the SSL landscape suffers from practical difficulties\\u2014specifically regarding memory: (1) pairwise losses require maintaining multiple views of each example, (2) CL requires large batch sizes, and thus large GPU memory requirements. Using pairwise similarities is indeed a design choice, but we showed that this is not required and much simpler methods can learn high-quality representations. Specifically, we show that theoretically the spectral contrastive loss and DIET with MSE loss are equivalent for linear models, and empirically S-DIET can match the performance of SSL methods. This demonstrates that supervised methods are expressive enough to be used in place of pairwise losses while avoiding the difficulties with the latter, such as high memory usage. To our knowledge, both our results are novel in the literature. We hope this clarifies the motivation for this work and have clarified this in the updated version.\\n\\n2. We proposed a memory-efficient method for representation learning (S-DIET). 
Feature normalization and projection head are common practices in CL literature that we applied to our memory-efficient DIET to further boost its performance, and confirm that S-DIET achieves a comparable or superior performance to CL. While the purpose of these additions is not to reduce memory usage, they are essential to obtain optimal performance. The memory efficiency of our method comes from the fact that we replaced complicated pairwise losses with a simple supervised loss and that we no longer need to maintain a massive classifier head in memory. We will clarify this in the revised version.\\n\\n3. We note that DIET does not require labels, the labels used in the DIET loss are pseudo-labels that are simply the datum index. Thus the setting for few shot SSL methods is different from the fully self-supervised setting of DIET, and it is not immediate how to apply the existing analysis in this new setting. Seeing whether the analysis can be extended to these methods would be interesting but is beyond the scope of the current work. In the existing setting, our theoretical and empirical results are, to our knowledge, the first of their kind.\\n\\n4. We thank the reviewer for the suggestion. 
We have included the following pseudocode for a single training step in the revised manuscript (page 27).\n```\n\\\"\\\"\\\"\nUppercase variables stored on disk\nLowercase variables stored in memory\nX: train data\nH: classifier head\nM: first moment for classifier head\nV: second moment for classifier head\nindices: indices for the current batch\n\\\"\\\"\\\"\ndef train_step(X, H, M, V, indices, model, criterion, optimizer):\n    # Load data, head weights, and head optimizer state into memory\n    inputs, head, optimizer_m, optimizer_v = X[indices], H[indices], M[indices], V[indices]\n    labels = [1, 2, ..., len(indices)]\n    \n    # Forward and backward pass\n    outputs = head(model(inputs))\n    loss = criterion(outputs, labels)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    head, m, v = perform_multistep_adamw_head_update(head, m, v)\n\n    # Save head weights and head optimizer state\n    # Done asynchronously\n    H[indices], M[indices], V[indices] = head, m, v\n\n\ndef perform_multistep_adamw_head_update(head, m, v):\n    g = head.grad\n\n    # first step\n    head = (1 - lr * weight_decay) * head\n    m = beta1 * m + (1 - beta1) * g\n    v = beta2 * v + (1 - beta2) * g * g\n    head = head - lr * m / (sqrt(v) + eps)\n\n    # all other steps\n    mu = beta1 / sqrt(beta2)\n    alpha1 = (1 - lr * weight_decay) ** (t - 1)\n    alpha2 = (alpha1 * lr * mu - lr * (mu ** t)) / (1 - lr * weight_decay - mu)\n    \n    head = alpha1 * head - alpha2 * m / (sqrt(v) + eps)\n    m = (beta1 ** (t - 1)) * m\n    v = (beta2 ** (t - 1)) * v\n\n```\n5. We thank the reviewer for pointing this out, we have updated the formatting of the paper.\"}", "{\"title\": \"Thanks for the reply\", \"comment\": \"Thank you for your response. While it provides some clarification, it does not fully address the concerns I raised. First, the updated pseudocode is somewhat helpful for understanding the methodology. 
Second, I acknowledge the authors' statement that the theoretical foundation is based on simplified assumptions. However, the revised version still fails to resolve the lack of logical coherence, which remains my primary concern. Addressing this issue, in my view, would require a major revision to the manuscript. As I noted in W1, the current outline attempts to address too many issues without focusing sufficiently on a single core problem.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We hope our rebuttal has addressed your concerns. As we are getting close to the end of the discussion session, we would like to reach out and see if you had a chance to read our rebuttal and if there is anything else we can further clarify? We are looking forward to your response.\"}", "{\"comment\": \"We hope our rebuttal has addressed your concerns. As we are getting close to the end of the discussion session, we would like to reach out and see if you had a chance to read our rebuttal and if there is anything else we can further clarify? We are looking forward to your response.\"}", "{\"metareview\": \"The paper studies DIET, a method for self-supervised representation learning by labeling each sample as a distinct class and training a classifier as in standard classification networks. The main contributions include\\n\\n- Theoretical Insight: The authors prove that DIET with a linear encoder trained with MSE loss is equivalent to the spectral contrastive loss, connecting supervised losses to contrastive learning frameworks.\\n- Feature Learning Limitation: DIET tends to prioritize learning less noisy features but may miss others. The authors demonstrate that feature normalization mitigates this issue, and a projection head further enhances performance.\\n- Memory Efficiency: S-DIET addresses DIET's memory inefficiency by storing the classification head on disk instead of memory. 
This significantly reduces GPU memory consumption.\n- Empirical Results: S-DIET achieves state-of-the-art performance on benchmarks like CIFAR-10 and ImageNet-100 with lower memory usage and no extensive hyperparameter tuning.\n\nReviewers generally recognize the importance of the problem, appreciate the theoretical insights and the practicality demonstrated through experiments. The main concerns are the following:\n\n- Unrealistic Assumptions: The theoretical analysis relies on a linear model setting and specific data models, questioning the applicability to more general, practical cases. \n- Fragmented Presentation: Reviewer nqvr criticized the paper for addressing multiple issues (theoretical equivalence, feature learning, and memory efficiency) in a \\\"fragmented manner,\\\" which weakens the paper\\u2019s logical flow. A similar concern was raised by Reviewer cXwx, who complained about fragmented conclusions and lack of motivation.\n\nThese concerns indicate that the paper requires improved writing to make the motivation and conclusion clearer to the audience of the conference and possibly more effort on the theoretical study. \n\nIn particular, while simplifying assumptions are necessary, the particular assumption of a linear network represents a significant departure from the practical use cases of contrastive learning. \n\nWith regard to the MSE loss, while the authors use Ref [1] (which provides a neural collapse analysis of MSE) to justify their choice, please note that neural collapse analysis has also been conducted for CE in earlier work. \n\nGiven these limitations, I\\u2019d place this interesting and potentially impactful study in a marginally below borderline area. 
I strongly encourage the authors to improve their manuscript addressing the clarity and theoretical assumption issues above for future submissions.\", \"additional_comments_on_reviewer_discussion\": \"Concern: Theoretical assumptions are unrealistic and limited to the linear case.\n\nConcern: Missing pseudocode for reproducibility.\nResponse: Authors included pseudocode for a single training step in the revised manuscript. Reviewer nqvr acknowledged this improvement.\"}", "{\"comment\": \"We hope our rebuttal has addressed your concerns. As we are getting close to the end of the discussion session, we would like to reach out and see if you had a chance to read our rebuttal and if there is anything else we can further clarify? We are looking forward to your response.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for your response. The response cannot fully address my concerns. Specifically, a result that holds in a linear case does not necessarily hold in a more general case. Some discussions are needed to explain why the \\\"insight\\\" obtained from the linear model holds in a general setting. The data model in Section 5 is too far away from practice. A more realistic setting can be considered. Addressing these comments may need substantive work.\"}", "{\"title\": \"Official Comment by Reviewer cXwx\", \"comment\": \"Thank you for the response, but my concerns remain unresolved.\n\nFirst, the authors claim that their motivation stems from the \\\"currently dominate the SSL landscape suffers from practical difficulties,\\\" and that \\\"DIET eliminates the reliance on pairwise similarities, thus reducing memory requirements.\\\" However, the pseudo-label-based supervised information in DIET similarly relies on approximate view invariance, which is fundamentally consistent. 
The equivalence of contrastive loss and MSE loss under this approximate view invariance has been widely acknowledged in the SSL community (e.g., [1-4]). Moreover, I reviewed the references cited by the authors in L37-38, but found no theoretical support for the connection between pairwise similarities and high memory usage, hoping the authors can clarify this further. \\n[1] Representation learning with contrastive predictive coding; [2] What makes for good views for contrastive learning?; [3] Learning deep representations by mutual information estimation and maximization; [4] contrastive learning can find an optimal basis for approximately view-invariant functions.\\n\\nSecond, regarding the methodology, the authors mention applying CL constraints on the projection head back to DIET, but the motivation behind this is unclear, especially as the mentioned \\\"the purpose of these additions is not to reduce memory usage\\\". If the method only aims to increase the accuracy points without considering the connection with motivation, it may not be reliable enough since the proposed issues still remain unsolved. Furthermore, the authors emphasize that the core of their algorithm is to \\\"replace complicated pairwise losses with a simple supervised loss,\\\" yet this seems to overlap with DIET and prior works such as [1-4]. \\n\\nAs I detailed in W1 & W2, my primary concern lies in the connection between the work's motivation, specific implementation, and conclusions. Also mentioned in the comments that if this can be adequately addressed, I would be happy to adjust my score, but the current rebuttal and revision may require further clarification.\"}", "{\"summary\": \"This paper presents S-DIET, a memory-efficient modification of DIET for self-supervised contrastive learning. It proves that DIET with a linear encoder and MSE loss is theoretically equivalent to spectral contrastive loss, and proposes feature normalization and projection head use to enhance performance. 
S-DIET significantly reduces DIET's memory requirements and achieves state-of-the-art performance without extensive hyperparameter tuning.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides comprehensive and rigorous theoretical proofs.\\n2. Addressing the high memory demand of DIET is a well-motivated objective with strong practical significance.\\n3. The experimental results presented in Table 5 demonstrate promising improvements.\\n4. The paper conducts a detailed and insightful ablation study.\", \"weaknesses\": \"1. The paper aims to address three issues: why DIET can perform comparably to CL, DIET's failure to learn all features, and its high memory demand. However, these issues are addressed in a fragmented manner without clear logical connections between them, making it difficult for readers to grasp the paper's central thesis.\\n2. The paper does not provide code or pseudocode, which hinders understanding of the proposed method and limits the ability to verify its effectiveness.\\n3. It is well-known that MSE is not typically used as a loss function for classification tasks. In the original DIET paper, each sample is treated as a separate class in a classification problem using cross-entropy loss. Why, then, is MSE employed as the loss function in Section 4? Does Theorem 4.5 hold if cross-entropy loss is used instead?\\n4. Due to the use of W1, it is unclear how Theorem 4.3 is related to the proposed method.\\n5. The theoretical analysis in the paper relies entirely on linear assumptions, while the proposed method (Equation 5) is based on empirical assumptions. 
These assumptions raise concerns about the rigor of this work.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"desk_reject_comments\": \"margin violation\", \"title\": \"Submission Desk Rejected by Program Chairs\"}", "{\"comment\": \"We thank the reviewer for their response. We have provided some clarifications about the contributions of our work in the general comment [(link)](https://openreview.net/forum?id=4NgxI6Z74n&noteId=4X8V2lFySi), and will address the reviewer\\u2019s response in detail below.\\n\\nThe reviewer correctly identifies that S-DIET maintains a large classifier head but the entire classifier head does not need to be held in memory. However, unlike DIET, storing the classifier head on disk is precisely why it is no longer a significant limitation when applying our method to large data. Specifically, the dataset itself has size $N \\\\times d$ while the classifier only has size $N \\\\times m$, where the embedding dimension $m$ is much smaller than the input dimension $d$. As an example, on ImageNet-1k, which has over 1.2 million images, using an embedding dimension of 2048 results in a classifier head with a size of approximately 10GB, whereas the ImageNet dataset itself is around 150GB. Thus, **storing the classifier head only adds negligible overhead to disk storage compared to storing the dataset. In exchange, our method requires half the GPU memory of existing CL methods**, as illustrated in Table 6. Lower GPU memory requirement has strong practical significance. For example, even with batch size 256 and a large A40 GPU, CL and other SSL methods nearly run out of memory when training a ResNet-50 on ImageNet data. Meanwhile, our S-DIET can use as little as half the memory as other SSL methods. Thus, we believe S-DIET is a promising alternative for SSL in memory-limited scenarios. 
We hope this clarifies the design choices made in the paper.\"}", "{\"title\": \"Part 2\", \"comment\": \"3. The use of the spectral contrastive loss and MSE loss in place of the InfoNCE loss and cross-entropy loss is common practice in non-information-theoretic analysis due to the tractability of the MSE loss [1,2,3,4,5]. While the exact equivalence does not hold for the cross-entropy loss, our experimental results in Section 6 and Appendix D suggest that the equivalence approximately holds. Exploring this connection rigorously would be an interesting idea for future work. Nevertheless, we believe our existing result is valuable as it is, to our knowledge, the first precise, fully rigorous connection between contrastive learning and supervised learning.\\n\\n4. Theorem 4.3 demonstrates an equivalence between the global minima of a linear model trained with the spectral contrastive loss and one trained with DIET (namely appending a classifier head $W_H$ and training $W$ and $W_H$ to minimize the MSE loss). This suggests that the simpler DIET methodology can replace complicated CL methods, so we developed S-DIET, a modification of DIET, as a simpler, memory-efficient alternative to CL.\\n\\n5. We note that it is almost always the case in the ML community that theoretical analysis is performed in simpler settings that do not perfectly apply to practical scenarios. However, all our assumptions are commonly used in the literature, such as linear models with spectral contrastive loss or MSE loss in Section 4 [1,2,3,4,5], or the sparse coding data model in Section 5 [6,7,8,9]. Moreover, the gap between theoretical and practical scenarios does not affect the rigor of the results: our proof of the equivalence between the global minima of the spectral contrastive loss and DIET with MSE loss for linear models is fully rigorous, and our experimental results show the efficacy and memory efficiency of S-DIET in real-world settings regardless of any theoretical results. 
The theoretical and empirical results provide two different points of view to justify our claim that DIET can match the performance of more complicated SSL methods.\\n\\n[1]: Zhou, J., Li, X., Ding, T., You, C., Qu, Q., & Zhu, Z. On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features. 2022.\\n\\n[2]: HaoChen, J. Z., Wei, C., Gaidon, A., and Ma, T. Provable guarantees for self-supervised deep learning with spectral contrastive loss. 2021.\\n\\n[3]: Saunshi, N., Ash, J., Goel, S., Misra, D., Zhang, C., Arora, S., Kakade, S., and Krishnamurthy, A. Understanding contrastive learning requires incorporating inductive biases. 2022.\\n\\n[4]: HaoChen, J. Z. and Ma, T. A theoretical study of inductive biases in contrastive learning. 2022.\\n\\n[5]: Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, and Baharan Mirzasoleiman. Which features are learnt by contrastive learning? on the role of simplicity bias in class collapse and feature suppression. 2023.\\n\\n[6] Yihao Xue, Eric Gan, Jiayi Ni, Siddharth Joshi, and Baharan Mirzasoleiman. Investigating the benefits of projection head for representation learning. 2024.\\n\\n[7]: Zixin Wen and Yuanzhi Li. Toward understanding the feature learning process of self-supervised contrastive learning. 2021.\\n\\n[8]: Zou, D., Cao, Y., Li, Y., and Gu, Q. Understanding the generalization of adam in learning neural networks with proper regularization. 2021.\\n\\n[9]: Chen, Y., Huang, W., Zhou, K., Bian, Y., Han, B., & Cheng, J. Understanding and Improving Feature Learning for Out-of-Distribution Generalization. 2023.\"}", "{\"title\": \"Part 1\", \"comment\": \"We appreciate the positive feedback from the reviewer on our rigorous theoretical results and the efficacy and significance of our proposed method S-DIET. We also thank the reviewer for providing detailed comments about our work. We hope to address each of the reviewer\\u2019s concerns below:\\n\\n1. 
The main thesis of the paper is that S-DIET is theoretically as powerful as CL while providing practical advantages (less memory usage). CL has been extensively studied in the literature and several additions have been proposed that are essential to obtain optimal performance. To confirm S-DIET\u2019s comparable performance to CL, we also studied multiple factors, i.e., normalization and projection head, that are common practice for representation learning with CL, and showed their effectiveness in boosting the performance of S-DIET. Nonetheless, we acknowledge the reviewer\u2019s remarks and will connect each part to the central thesis in the updated version.\n\n2. We thank the reviewer for the suggestion. We have included the following pseudocode for a single training step in the revised manuscript (page 27).\n```\n\\\"\\\"\\\"\nUppercase variables stored on disk\nLowercase variables stored in memory\nX: train data\nH: classifier head\nM: first moment for classifier head\nV: second moment for classifier head\nindices: indices for the current batch\n\\\"\\\"\\\"\ndef train_step(X, H, M, V, indices, model, criterion, optimizer):\n    # Load data, head weights, and head optimizer state into memory\n    inputs, head, optimizer_m, optimizer_v = X[indices], H[indices], M[indices], V[indices]\n    labels = [0, 1, ..., len(indices)-1]\n    \n    # Forward and backward pass\n    outputs = head(model(inputs))\n    loss = criterion(outputs, labels)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    head, m, v = perform_multistep_adamw_head_update(head, m, v)\n\n    # Save head weights and head optimizer state\n    # Done asynchronously\n    H[indices], M[indices], V[indices] = head, m, v\n\n\ndef perform_multistep_adamw_head_update(head, m, v):\n    g = head.grad\n\n    # first step\n    head = (1 - lr * weight_decay) * head\n    m = beta1 * m + (1 - beta1) * g\n    v = beta2 * v + (1 - beta2) * g * g\n    head = head - lr * m / 
(sqrt(v) + eps)\\n\\n # all other steps\\n mu = beta1 / sqrt(beta2)\\n alpha1 = (1 - lr * weight_decay) ** (t - 1)\\n alpha2 = (alpha1 * lr * mu - lr * (mu ** t)) / (1 - lr * weight_decay - mu)\\n \\n head = alpha1 * head - alpha2 * m / (sqrt(v) + eps)\\n m = (beta1 ** (t - 1)) * m\\n v = (beta2 ** (t - 1)) * v\\n\\n```\"}", "{\"revert_desk_rejection_confirmation\": \"We approve the reversion of desk-rejected submission.\", \"comment\": \"The margin difference is not noticeable without a ruler (all the style difference of this paper is due to the usage of a legacy template), and the paper will fit into 10 pages with the correct template. After more cases emerges and based on more discusses, we decided to lean on the lenient side. Please proceed with reviewing of this paper.\"}", "{\"comment\": \"We would like to highlight the main motivation and contribution of our work, which is not fully captured in all the reviews. Contrastive Learning (CL) is one of the most popular and successful methods for representation learning. However, CL requires a large encoder and crucially relies on a large batch size to learn high-quality representation. Therefore, CL methods require a considerably large GPU memory. For example, the well-known SimCLR method trained their best model with CloudTPUs, using 128 cores and a batch size of 8192 to train encoders 4x larger than ResNet50 (Section 2.2 in [1])! In our work, we investigated an alternative \\u201csupervised\\u201d approach for representation learning (S-DIET), with considerably lower memory requirements. We theoretically proved the promise of S-DIET to learn high-quality representations, and empirically confirmed its effectiveness:\\n1. We proved a precise equivalence between solutions of S-DIET and spectralCL, which confirms that supervised approaches can learn high-quality representations. To our knowledge, this is the first fully rigorous connection between supervised and contrastive learning.\\n2. 
We showed that S-DIET can be implemented with much lower memory requirements, compared to CL.\\n3. To further boost the S-DIET performance, we proved in a simple theoretical example that normalizing embeddings enables S-DIET to learn noisier features in addition to the less-noisy ones. \\n4. We empirically showed that S-DIET matches the performance of state-of-the-art CL methods while using substantially less memory than all the other methods.\\nWe believe our contributions show the promise of simpler and more efficient approaches for representation learning.\\n\\n[1] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations, 2020.\"}", "{\"summary\": \"This paper studies DIET, a method that has seen rather little adoption. This paper claims that DIET uses less memory than common contrastive learning approaches, and proves some theoretical results showing that DIET and spectral contrastive learning share the same solutions. Moreover, this paper proposes a new alternative, S-DIET, to further improve its performance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. This paper has theoretical claims that connect DIET to CL.\\n2. This paper has some empirical evidence that the proposed S-DIET can match the performance on benchmarks like CIFAR and ImageNet-100.\", \"weaknesses\": \"The method DIET studied in this paper has a fatal limitation, which is that the labels are essentially the sample index. As the dataset size increases, the classification head will need to grow linearly as well, which makes it impractical to use in large-scale dataset training. 
Even though the proposed S-DIET does not require the classification head to always be loaded into memory, it is still unnecessary to store such a large head, especially when one is training on millions of samples.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study provides a theoretical analysis of DIET, a recently proposed supervised approach for self-supervised learning. DIET, as a method of CL, labels each example by its datum index and employs a supervised loss for training. This work obtains several conclusions, including (i) for linear encoders, DIET with MSE loss is equivalent to spectral contrastive loss; (ii) DIET tends to learn features with less noise but may not capture all relevant aspects of the training data; (iii) feature normalization can help mitigate this issue, while incorporating a projection head can further enhance performance. This work further introduces SCALED-DIET (S-DIET) to improve the model's linear probe accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper explores the limitations of DIET, an important method of CL, and obtains several conclusions: (i) for linear encoders, DIET with MSE loss is equivalent to spectral contrastive loss; (ii) DIET tends to learn features with less noise but may not capture all relevant aspects of the training data; (iii) feature normalization can help mitigate this issue, while incorporating a projection head can further enhance performance.\", \"This work further introduces SCALED-DIET (S-DIET) to improve the model's linear probe accuracy, i.e., using batch cross entropy and the multistep update formula for AdamW.\", \"Some experiments demonstrate the effectiveness of the proposed S-DIET.\"], \"weaknesses\": [\"The motivation behind this paper is unclear. 
According to P2 in the Introduction, DIET's advantage lies in its ability to mitigate CL's reliance on large datasets, while requiring a smaller parameter dimension to balance with sample size. The claim that smaller encoder dimensions are a key drawback for handling large data is not well justified. Furthermore, the authors state, \\\"not clear whether DIET can capture the pairwise similarities between views...SSL,\\\" yet DIET, as a CL algorithm utilizing supervised loss, does not explicitly depend on pairwise similarities; this is merely an implementation choice rather than a fundamental mechanism. I fail to see how this motivation strongly connects DIET with contrastive loss. Is pairwise similarity closely related to memory? While exploring CL from an efficiency perspective would be valuable, a thorough reading of the paper reveals a lack of such information. Perhaps I overlooked some details, and I hope the authors can clarify their insights.\", \"The key conclusions in this work need further explanation in relation to the core idea, i.e., MEMORY-EFFICIENT. For instance, the relationships between memory and features, encoder parameter dimensions, and projection heads are not adequately described, leading to fragmented conclusions.\", \"The choice to study DIET is justified by its independence from large training data, but there are other CL algorithms that also do not rely on large datasets, such as few-shot SSL. Additionally, DIET requires labeled information. Can the analyses in this work be applied to these other methods? If so, what differentiates this work? A broader exploration of algorithms and their mechanisms might enhance the reliability of this study.\", \"Code that reflects the algorithm implementation is encouraged, since it is currently only described in 6 lines. 
At the same time, the introduction of these modules will increase the computational overhead, and related experiments are also necessary, after all, the focus is on memory.\", \"(Minor) The paper's template appears to differ from the one provided on the official website, such as in the line numbering. Please consider making further corrections.\", \"**I would be happy to reconsider my score if these concerns can be addressed.**\"], \"questions\": \"Please see **Weaknesses**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4NWtrQciRH
Evidential Learning-based Certainty Estimation for Robust Dense Feature Matching
[ "Lile Cai", "Chuan-Sheng Foo", "Xun Xu", "ZAIWANG GU", "Jun Cheng", "Xulei Yang" ]
Dense feature matching methods aim to estimate a dense correspondence field between images. Inaccurate correspondences can occur due to the presence of unmatchable regions, necessitating certainty measurement. This is typically addressed by training a binary classifier to decide whether each predicted correspondence is reliable. However, deep neural network-based classifiers can be vulnerable to image corruptions or perturbations, making it difficult to obtain reliable matching pairs in corrupted scenarios. In this work, we propose an evidential deep learning framework to enhance the robustness of dense matching against corruptions. We modify the certainty prediction branch in dense matching models to generate appropriate belief masses and compute the certainty score by taking the expectation over the resulting Dirichlet distribution. We evaluate our method on a wide range of benchmarks and show that our method leads to improved robustness against common corruptions and adversarial attacks, achieving up to 10.1\% improvement under severe corruptions.
[ "Evidential Deep Learning", "Dense Feature Matching", "Pose Estimation" ]
Accept (Poster)
https://openreview.net/pdf?id=4NWtrQciRH
https://openreview.net/forum?id=4NWtrQciRH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yi6iuLKiWi", "y1eNLlH4hD", "pKpWLW6eCx", "kZT2s5Nin8", "iDtgETTDSj", "XX40S0HUu1", "WGtAZlR6Ge", "TFSUULIABO", "SqwZZisazl", "QWcEeueH3x", "NqfgMVQ88M", "N5bAU2JShB", "J0bMgLXC9v", "Ht1DWG75eK", "EcyPwQyE9i", "BY2YP0ajAv", "ApxpvhxwAT", "9TT0FWXotm", "9SsBmFhJo7", "31s9bpmP62", "1bOWGDTPaI", "1V9neJ6cyc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1732786129472, 1732783784295, 1732785407426, 1733304071371, 1733146948088, 1729993864951, 1732786837387, 1733061518293, 1730597281674, 1733110062234, 1733165384974, 1733146354888, 1733193147927, 1733132013315, 1733194504907, 1733125328837, 1734813321498, 1730381678232, 1732783135138, 1737523735574, 1733304309777, 1730327231165 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_oVDg" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_Uwz6" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_3kpK" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_vn4d" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_vn4d" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_oVDg" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_vn4d" ], [ 
"ICLR.cc/2025/Conference/Submission5951/Reviewer_Uwz6" ], [ "ICLR.cc/2025/Conference/Submission5951/Area_Chair_6zc6" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_3kpK" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5951/Authors" ], [ "ICLR.cc/2025/Conference/Submission5951/Reviewer_oVDg" ] ], "structured_content_str": [ "{\"comment\": \"* W1. Why EDL is still effective for binary classification task\\n\\nAs mentioned by the reviewer in the comments, EDL's main advantage is to detect out-of-distribution (OOD) samples or mining pseudo-unknown objects. This advantage comes from the fact that by modeling second-order probabilities and uncertainty, EDL provides improved uncertainty estimation. As shown in the seminal work [Sensoy et al., NeurIPS2018], EDL will produce larger uncertainty for OOD samples, while the standard approach (training the network by minimizing cross-entropy loss) tends to produce over-confident (low-uncertainty) predictions. Assuming a binary classification task and an OOD sample with ground truth label [1, 0], the standard approach may produce an over-confident prediction [0.01, 0.99], while EDL may produce a high-uncertainty prediction [0.4, 0.6]. Now assume that the system needs to continue the downstream estimation pipeline with the over-confident or high-uncertainty prediction, with which prediction shall the system perform better? Intuitively, an over-confident prediction is more devastating as the system cannot recover from it, while a high-uncertainty prediction still assigns some confidence to the correct class and the system may still be able to perform well with it. This is exactly what happens when we apply EDL for the certainty estimation task in dense matching. In Fig.6 of the paper, we provide visualization of the certainty map estimated by RoMa and our method. 
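As an aside for readers unfamiliar with EDL, the binary-case computation described in this rebuttal (evidence, then Dirichlet parameters, then expected class probability) can be sketched in a few lines. The function below is purely illustrative and assumes softplus-style non-negative evidence inputs; it is not the authors' implementation:

```python
import numpy as np

def edl_binary_certainty(evidence_match, evidence_unmatch):
    """Certainty as the Dirichlet expectation over the 'matchable' class.

    With K = 2 classes, alpha_k = evidence_k + 1, the expected probability
    of class k is alpha_k / S, and the built-in uncertainty (vacuity) is
    K / S, where S is the sum of the Dirichlet parameters.
    """
    alpha_m = np.asarray(evidence_match, dtype=float) + 1.0
    alpha_u = np.asarray(evidence_unmatch, dtype=float) + 1.0
    S = alpha_m + alpha_u
    return alpha_m / S, 2.0 / S  # (certainty, uncertainty)

# With no evidence for either class the prediction stays maximally
# uncertain: certainty 0.5, uncertainty 1.0.
c0, u0 = edl_binary_certainty(0.0, 0.0)
# With strong evidence for 'matchable' the certainty approaches 1.
c1, u1 = edl_binary_certainty(9.0, 0.0)
```

Under low evidence the expected probability stays near 0.5 rather than saturating, which is exactly the behavior contrasted here with an over-confident sigmoid output.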
RoMa tends to predict very low certainty values (e.g., below 0.05) for matchable regions on corrupted images (i.e., over-confidently classifying the region as unmatchable), while our model produces a certainty value around 0.4 for matchable regions -- it is not perfect, but good enough to facilitate the following balanced sampling step to sample a diverse set of matches from matchable regions. We have modified the Introduction section and Section 4.5 to illustrate this point better.\\n\\n\\n* W2. The overall contribution is limited, lacking enough in-depth discussion\\n\\nWe added Fig.7 in the revised paper to provide more insight into the behavior of EDL in dense feature matching. We visualize the evidence map estimated by our model, and reveal that under corrupted cases, the model cannot produce high evidence for either class in the matchable region. This causes high-uncertainty estimates for the matchable region. Compared to the over-confident prediction generated by RoMa, the EDL prediction enables more effective sampling of reliable matches and thus achieves better performance.\\n\\n* W3. The introduction of EDL in Section 3.2 is insufficient\\n\\nWe have revised Section 3.2 to introduce the method in more detail.\\n\\n* W4. Missing discussion of visual localization (InLoc and AachenDay-Night) and homography estimation (HPatches)\\n\\nWe added additional benchmarking datasets in A.3 of the revised paper. Our method obtains a 1.7% increase in mAA@10px over RoMa on WxBS, and a 0.5% increase in AUC@3px on HPatches. For visual localization (InLoc and AachenDay-Night), RoMa does not release their evaluation code. We need more time to reproduce their results and evaluate our method, and thus do not report the results in the current version of the paper. \\n\\n* Q1. 
Difference or relationship of the proposed method compared to outlier filtering post-processing methods like RANSAC\\n\\nThe outliers in RANSAC refer to points that do not fit the current estimate of the geometry model (e.g., homography, essential matrix). The certainty estimation in our task does not involve a geometry model. Instead, the matches sampled based on the estimated certainty will be used in downstream geometry estimation tasks, where RANSAC will be applied.\"}", "{\"comment\": \"* W1. Ablation study on coarse-scale and fine-scale EDL\\n\\nWe added an ablation study on applying EDL to different scales in Table 5 of the revised paper. We observe that applying EDL on the coarse scale alone is not effective, as the final prediction comes from the finest scale (which is still learnt by the BCE loss). Applying EDL on fine scales improves the performance on corrupted samples significantly. The best performance is achieved when EDL is applied on both coarse and fine scales.\\n\\n* W2. Experiments on IMC2022 and WxBS\\n\\nWe added additional benchmarking results in A.3 of the revised paper. Compared to RoMa, our method obtains a 1.7% increase in mAA@10px on WxBS, and a 0.5% increase in AUC@3px on HPatches. For IMC2022, RoMa does not release their evaluation code. We need more time to reproduce their results and evaluate our method, and thus do not report the results in the current version of the paper. \\n\\n* W3. Computational cost of our method\\n\\nWe report the training and inference time of RoMa and our method in Table 6 of the revision. For training, our method takes 1.4% more GPU hours than RoMa (128.4 vs. 126.6 GPU hours). For inference, our method actually incurs marginally lower cost (327 vs. 329 ms). This is probably due to the fact that EDL uses simple operations like addition and division to obtain the final probability, eliminating the more complicated sigmoid computation in RoMa.\"}", "{\"comment\": \"* W1. 
Only a single model is tested\\n\\nWe verify our method on another dense matcher, DKM [Edstedt et al., 2023], in A.4 of the revised paper. Our method increases the clean AUC@5 by 1.6% on MegaDepth-1500, and increases the mean corruption AUC@5 by 1.3% on MegaDepth-1500-C. These results demonstrate the generalizability of our method beyond RoMa.\\n\\n* W2. Explanation of EDL is difficult to understand \\n\\nWe have revised Section 3.2 to make it clearer.\\n\\n* W3. EDL's built-in uncertainty measure is not used, and applying EDL on the regression-by-classification in the coarse matching step\\n\\nIn our current approach, EDL's built-in uncertainty measure is not used, and instead its expected value over the first class is used to indicate matching reliability. Actually, in the seminal work [Sensoy et al., NeurIPS2018], EDL's built-in uncertainty measure is also not used: for fair comparison with other methods, the authors choose to use the entropy computed from the expected class probabilities and demonstrate improved uncertainty measurement for out-of-distribution detection. This suggests that the expected class probability of EDL itself is informative and can be directly used for subsequent tasks.\\n\\nIt is possible to apply EDL for the regression-by-classification in the coarse matching step, but there are two drawbacks to this approach. First, the uncertainty at the coarse scale does not necessarily represent the uncertainty at the finest scale. In the current method, only the matches and certainty score at the finest scale are used for balanced sampling and downstream tasks. As shown in the ablation study on coarse vs. fine scale EDL in the revised paper (Table 5), it is the EDL at the fine scale that contributes most to the improvement, and coarse-scale EDL is not effective. 
Second, while RoMa uses a regression-by-classification formulation in the coarse matching step, other dense matchers, e.g., DKM [Edstedt et al., 2023] and DGC-Net [Melekhov et al., 2019], use L1 or L2 regression for coarse matching. Nevertheless, all these dense matchers use the same classification formulation for certainty estimation. Our current formulation of EDL for certainty estimation thus avoids model-specific design and can be easily applied to other dense matchers.\\n\\n* Q1. How much does the threshold used for balanced sampling matter for the robustness?\\n\\nWe observe that an inappropriate choice of the threshold can adversely affect both RoMa and our method. For example, increasing the threshold from 0.05 to 0.1 degrades the performance of our method by 2.6%, and RoMa by 1.7% on MegaDepth-1500 corrupted by Gaussian noise at severity level 5. A sampling technique without such a hardcoded threshold would be an interesting future work. \\n\\n* Q2. Is the new certainty estimation more or less as heavy as the old in terms of inference and training speed?\\n\\nYes. Your reading is correct. We report the training and inference time of RoMa and our method in Table 6 of the revised paper. For training, our method takes 1.4% more GPU hours than RoMa. For inference, our method actually incurs marginally lower cost than RoMa due to the elimination of the sigmoid computation for obtaining the final probability.\\n\\n* Q3. Is the increased performance due to only better certainties? Is the performance the same if certainty scores from the new model are combined with matches from the original RoMa?\\n\\nWe added in A.5 of the revised paper the results of combining the certainty score from our model with the warp from the original RoMa model. We observe that using our certainty score, the performance of RoMa is increased from 25.3% to 36.5%, close to our result of 37.9%. 
In Section 4.5 of the paper, we report the average endpoint error (AEPE), which is defined as the average Euclidean distance between the estimated and ground truth warp. The AEPE for our method and RoMa on MegaDepth-1500-C is 6.12 and 6.07, respectively. The comparable AEPE values suggest that the warp prediction branch of our method performs similarly to RoMa. These two studies combined suggest that it is mainly the better certainty estimation that brings the improvement in performance.\\n\\n* Q4. Is there a way to make use of the built-in uncertainty in the Dirichlet distribution?\\n\\nOne way to utilize the built-in uncertainty may be to use it as a weighting on top of the predicted reliability score, so that matches with a high reliability score and a low uncertainty score are more likely to be sampled in the following balanced sampling step. However, our preliminary study shows that this weighting strategy does not produce better results. How the built-in uncertainty in EDL can be used to improve feature matching performance is an interesting and relevant problem and we would like to explore it in the future.\"}", "{\"comment\": \"Dear Reviewer vn4d,\\n\\nThanks for your feedback. We would like to clarify that in our experiment, the attack is added to the ImageNet std-mean normalized image in [-2.1, 2.6], not the original image in [0, 1] (that is why we answered that our valid image space is [-2.1, 2.6]. We implement it in this way as it is the ImageNet std-mean normalized image that is fed into the forward and loss function in our code). Based on your comments, a reasonable choice of epsilon values would be [1/255, 2/255, 4/255, 8/255, 12/255] in the [0, 1] image space. When we apply it to our image space, the epsilon values need to be scaled by the reciprocal of the ImageNet std, i.e., 1/0.225$\\\\approx$4.44 (the ImageNet std is [0.229, 0.224, 0.225]. Strictly speaking, the epsilon value needs to be scaled differently for the three channels. 
Here we just take the median value 0.225 for simplicity since the values are very close). The equivalent epsilon values in our image space would be: [1/255, 2/255, 4/255, 8/255, 12/255]*4.44 = [0.01741176, 0.03482353, 0.06964706, 0.13929412, 0.20894118]. Therefore, our original comparison at epsilon 0.1 and 0.2 should still be meaningful: a noticeable gain of our method over RoMa can be observed, which supports our claim that our method is more robust than RoMa.\\n\\nWe managed to run the experiment on the new set of epsilon values for our method and RoMa. The results are reported below. Here we directly use the equivalent attack value in the [0, 1] space for brevity (while experiments are still done using the actual epsilon values in our image space).\\n\\nAUC@5 on MegaDepth-1500 under FGSM attack:\\n| epsilon | 1/255 | 2/255 | 4/255 | 8/255 | 12/255 |\\n|---------|-------|-------|-------|-------|--------|\\n| RoMa | 54.1 | 50.3 | 44.9 | 38.9 | 34.5 |\\n| Ours | 55.2 | 51.1 | 46.7 | 40.9 | 37.4 |\\n\\nAUC@5 on MegaDepth-1500 under PGD attack:\\n| epsilon | 1/255 | 2/255 | 4/255 | 8/255 | 12/255 |\\n|---------|-------|-------|-------|-------|--------|\\n| RoMa | 54.3 | 48.5 | 34.7 | 19.1 | 14.8 |\\n| Ours | 55.4 | 49.7 | 37.2 | 22.5 | 16.7 |\\n\\nWe observe consistent gains of up to 2.9% for FGSM and 3.4% for PGD. We believe such results demonstrate the improved robustness of our method over RoMa. We will update the full set of results in our revision. We hope this helps to address your concern.\"}", "{\"comment\": \"
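The unit conversion quoted in this reply can be sanity-checked in a few lines (4.44 is the rounded reciprocal of the ImageNet std 0.225, as stated above):

```python
# Re-express L-infinity budgets defined in [0, 1] image space in the
# ImageNet std-mean normalized space by multiplying with 1/0.225 ~= 4.44.
eps_01 = [1 / 255, 2 / 255, 4 / 255, 8 / 255, 12 / 255]
eps_normalized = [round(e * 4.44, 8) for e in eps_01]
print(eps_normalized)
# [0.01741176, 0.03482353, 0.06964706, 0.13929412, 0.20894118]
```

These match the values quoted in the reply, confirming that a small perturbation in [0, 1] space corresponds to a numerically larger epsilon after normalization.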
The presented method has an advantage in that it seems to work for uncorrupted and corrupted images with the same threshold, but it is still relevant to know if RoMa can be made to work on corrupted images simply by changing the score threshold.\\n\\n> A sampling strategy that avoids such hardcoded value should be a better solution.\\n\\nI agree, but my understanding is that the proposed method still needs a hardcoded balanced sampling threshold (the same value as in RoMa). Is this understanding incorrect?\"}", "{\"summary\": \"This paper proposes to unify evidential deep learning (EDL) and dense feature matching, achieving more robust matching results, especially for corrupted image pairs. The authors propose MegaDepth-1500-C and ScanNet-1500-C benchmarks to evaluate the robustness of the proposed method under common image corruptions. The proposed method enjoys superior results in both clean and corrupted data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The incorporation of EDL to dense feature matching is interesting, and has not been investigated before.\\n2. The proposed method enjoys good performance in corrupted data.\", \"weaknesses\": \"1. Although the point of EDL is interesting, the usage of EDL for certainty estimation in dense feature matching is still questionable. From the introduction in Section 2.3, I think EDL's main advance is to detect out-of-distribution samples or mining pseudo-unknown objects. However, the certainty estimation in feature matching is just a binary classification task (matched or not matched). Why is EDL still effective? The authors did not provide a more insightful discussion about this key question.\\n\\n2. The overall contribution is limited. Because of lacking enough in-depth discussion about EDL and certainty estimation in feature matching, makes this work appear more as a mere combination of these two approaches rather than a convincing exploration.\\n\\n3. 
The introduction of EDL in Section 3.2 is insufficient, missing the necessary background/preliminaries from related works.\\n\\n4. Experiments are not sufficient, missing discussion of visual localization (InLoc and AachenDay-Night) and homography estimation (HPatches). The proposed method achieves significant improvements on corrupted data, while the improvements on clean data are limited. As a general certainty estimation method, EDL should consistently improve the matching accuracy in all scenarios.\", \"questions\": \"If EDL is mainly used for certainty estimation, what are the differences or relationships of the proposed method compared to outlier filtering post-processing in feature matching (RANSAC)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank all the reviewers for their insightful comments. We have uploaded a revised version of our paper. Specifically, the following changes have been made to address reviewers' concerns:\\n1. Modified the Introduction section to better motivate the use of evidential deep learning (EDL) in certainty estimation for dense feature matching\\n2. Modified Section 3.2 to provide more details in introducing EDL.\\n3. Added Fig.7 in Section 4.5 to provide more insight into why employing EDL improves the performance.\\n4. Added an ablation study on applying EDL to different scales in Section 4.5\\n5. Added a table for computational cost comparison in Section 4.5\\n6. Added A.1 to provide implementation details on model architecture, datasets and training procedure\\n7. Added A.2 to provide benchmarking results on 3D Common Corruptions and CosPGD\\n8. Added A.3 to provide results on additional benchmarking datasets\\n9. Added A.4 to verify our method with another dense feature matching method\\n10. 
Added A.5 to provide results on combining the certainty score of our method and the warp estimation of RoMa\\n\\nAll the changes have been highlighted in blue in the revision. Responses to specific comments are provided under each reviewer's comment.\"}", "{\"comment\": \"Thanks for the response. The authors presented results from the ablation study, experiments on the mentioned datasets, as well as details on training and inference times. I think my concerns have been addressed and I will retain my positive rating.\"}", "{\"summary\": \"This paper presents an interesting idea of evidential learning for certainty estimation for the dense pixel-level feature matching task.\\nThe proposed method is supposedly more robust to OOD data and adversarial attacks than the current SotA RoMa. \\nIt is tested against 2D Common Corruptions variants of 2 commonly used datasets for this task, i.e., MegaDepth-1500 and ScanNet-1500.\\nIt is also tested against outdated adversarial attacks such as FGSM and PGD.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"If the idea of using evidential learning for feature matching is truly novel then that makes the work quite interesting and significant.\\nApart from a couple of small typos, the paper is very well written. \\nThe structure of the paper and the intended story are easy to follow.\\nThe abstract of the paper is well written and to the point.\", \"weaknesses\": \"W1- **A lot of implementation details are missing from the paper.**\\nSimply mentioning that it is built on top of RoMa is insufficient information.\\nIt is understandable to do so for the main paper to save space; however, the supplementary material should be used to provide such information, for example, the exact architecture, the training procedure, details about the datasets, HPC resources used, and other details important for reproducibility. 
\\n\\nW2- **Needs a stronger argument for why OOD and Adversarial Robustness is important.**\\nThe argument made in the introduction to explain why OOD and adversarial robustness are important for this task can be made significantly stronger. Unfortunately, a case has not been made for why this is interesting and important for the community. \\n\\nW3- **Out-dated evaluations for robustness.**\\nIf the argument for OOD and adversarial robustness is readiness for the real world, then the evaluations used do not hold up to the argument, since the 3D Common Corruptions [1] are closer to real-world corruptions than the 2D Common Corruptions used in the paper. Additionally, FGSM and PGD attacks were used for evaluating adversarial robustness; however, [2] showed in their work that these attacks, originally proposed for image classification, are inadequate for pixel-wise prediction tasks, such as the one used in this proposed work. This is because FGSM and PGD optimize the attack by increasing the aggregate loss and not the per-pixel loss; this can cause the attack to be highly localized, making a non-robust method appear very robust, as the mean performance would still be quite good over the rest of the image space. Thus, specialized pixel-wise attacks such as CosPGD are essential for truly evaluating the adversarial robustness of pixel-wise prediction tasks. \\n\\nW4- **Using 2D Common Corruptions on other known datasets is not always a novel contribution.**\\nIt is unclear if the contribution of the 2 supposed OOD robustness evaluation datasets, MegaDepth-1500-C and ScanNet-1500-C, is merely using the 2D Common Corruptions proposed for the ImageNet-1k and CIFAR datasets but changing their resolutions and applying them to the respective iid datasets, or if there is more to the story, for example, some unforeseen complications that needed to be handled? 
If not, then simply applying these corruptions to other datasets is not exactly a novel contribution; it is still an interesting study, just not a \\\"new contribution\\\" as claimed in the bullet points in the introduction of the paper.\\n\\nW5- **Almost Redundant Presentation of Results.**\\nIncluding both Table 1 and Figure 3 is redundant. I understand that Table 1 contains the mean values over the 5 severity levels while Figure 3 shows the values at each severity; however, by using straight dashed lines of the respective colors, with y = mean value for all x values, the need for Table 1 is eliminated.\\n\\n\\n\\n\\n**References**\\n\\n[1] Kar, O\\u011fuzhan Fatih, et al. \\\"3d common corruptions and data augmentation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[2] Agnihotri, Shashank, Steffen Jung, and Margret Keuper. \\\"CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks.\\\" Forty-first International Conference on Machine Learning. 2024.\", \"questions\": \"Following are the questions for which I would highly appreciate an answer; these questions have not impacted my current recommendation for this paper, however, the response might have a significant impact on my final recommendations.\\n\\nQ1- **Unclear evaluation details for adversarial attacks used.** \\nThe epsilon values used for the attack start from 0.1, 0.2 and go up to 1.0 -- are the attacks l-infinity norm bounded? If yes, then what is the valid image space? Is it [0, 1] or is it [0, 255], meaning when epsilon = 1, does this mean that the epsilon is actually 1/255 (meaning that the valid image space is [0, 255]), or is the value of epsilon actually 1, meaning the entire image is nothing but adversarial noise? 
In this case, the image would also look semantically different to the human eye meaning that it will no longer be a valid adversarial attack.\\nAnd if the epsilon value is in fact 1/255, then the drop in performance is too significant for a very small epsilon value indicating the method is not truly robust to adversarial attacks. Could you also please comment on this?\\n\\nQ2- **The idea of using Evidential Learning for Pixel-Matching is not entirely novel.** \\nWhile the exact downstream task in [3] is different from the one explored by this proposed work, the core ideas for both seem unusually very similar, the key difference being the distributions used, while [3] used a Normal Inverse-Gamma (NIG) distribution, this work uses a Dirichlet distribution. Would you please further highlight the key differences between the two other than some task-related implementation details?\\n\\n\\n**References**\\n\\n[3] Chen Wang, Xiang Wang, Jiawei Zhang, Liang Zhang, Xiao Bai, Xin Ning, Jun Zhou, Edwin Hancock,\\nUncertainty estimation for stereo matching based on evidential deep learning,\\nPattern Recognition, Volume 124, 2022,108498, ISSN 0031-3203, https://doi.org/10.1016/j.patcog.2021.108498. (https://www.sciencedirect.com/science/article/pii/S0031320321006749)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewers,\\n\\nAs today is the last day you can post a message to us, may we check if our rebuttal has addressed your concerns? Is there any clarification needed? We appreciate you taking the time to read our revision and response. Please do not hesitate to let us know if you have any further concerns. We will try our best to address them.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you very much for your response and the changes to the submission. 
\\n\\nMost of my concerns in the original review have now been answered; however, unfortunately, one of the answers to my questions raises a very big concern for me. \\n\\nThe epsilon values used for attack evaluations are in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} when the valid unnormalized image space is [0, 1]. To the best of my knowledge, this is highly unusual and does not align with previous works on adversarial attacks.\\n\\nAs rightly mentioned in the response, \\\"adversarial attacks are desired to be imperceptible\\\". Therefore, when in an image space of [0, 1], and when $\\\\ell_{\\\\infty}$-norm bounded, the usually used epsilon values are {1/255, 2/255, 4/255, 8/255} i.e. $\\\\approx$ {0.00392156862, 0.00784313725, 0.0156862745, 0.03137254901}. However, the lowest epsilon value considered here is already '0.1'. This is a very high epsilon value! Adversarial attack evaluations and comparisons at such high epsilon values do not really signify much, and thus the evaluations need to be corrected. \\n\\nI would have liked to give this feedback significantly earlier in the discussion phase when revisions were possible; however, (understandably) this question has been answered only recently. \\n\\nIn light of this major concern, I recommend rejecting the paper in its current form, allowing for this concern to be addressed. However, I am open to further clarifications and discussions.\\n\\nBest Regards\\n\\nReviewer vn4d\"}", "{\"comment\": \"We would like to thank the reviewer for reading our rebuttal. For the follow-up question regarding the score threshold, we would like to clarify that 0.05 is the value used by DKM and RoMa in their paper and code. We do not tune this parameter but just follow their setting for a fair comparison. We believe the authors of DKM and RoMa have carefully chosen this value based on their extensive benchmarking experiments. 
Reducing the threshold to 0.025 may work for MegaDepth-1500, but may degrade on other benchmarks. A sampling strategy that avoids such a hardcoded value should be a better solution.\"}", "{\"comment\": \"Dear reviewer vn4d,\\n\\nFor the adversarial attack experiment, we follow the setup in the seminal work on evidential deep learning [Sensoy et al., NeurIPS2018] (https://proceedings.neurips.cc/paper/2018/file/a981f2b708044d6fb4a71a1463242520-Paper.pdf). In Fig.4 of the paper, the FGSM attack is implemented with epsilon values from 0.1 to 1 on the MNIST dataset; in Fig.5 of the paper, the FGSM attack is implemented with epsilon values from 0.05 to 0.4 on the CIFAR5 dataset. They do not mention the valid image space in the paper. We checked their official tensorflow implementation at https://muratsensoy.github.io/uncertainty.html; in [322], the line \\\"nimg = np.clip(a=nimg,a_min=0,a_max=1)\\\" indicates they use the valid image space [0,1] in the experiment. Therefore, it seems to us that it is common practice in the evidential learning literature to use large epsilon values to attack the model.\\n\\nPreviously we experimented with small epsilon values 0.001 and 0.01 for the FGSM and PGD attacks (we do not plot these results in the paper as the labels would clutter the x-axis). We provide the results here in the tables below:\\n\\nAUC@5 on MegaDepth-1500 under FGSM attack:\\nepsilon | 0.001 | 0.01 |\\n---------|-------|-------|\\nRoMa | 60.2 | 56.6 |\\nOurs | 61.7 | 57.9 |\\n\\nAUC@5 on MegaDepth-1500 under PGD attack:\\nepsilon | 0.001 | 0.01 |\\n---------|-------|-------|\\nRoMa | 53.2 | 49.8 |\\nOurs | 53.9 | 50.5 |\\n\\nWe observe a clear advantage of our method over RoMa under small epsilon values as well. 
Actually, given our method outperforms other methods consistently in large epsilon values from 0.1 to 1, we do not see why the trend should be reversed for small epsilon values.\"}", "{\"comment\": \"I thank the authors for the well-written rebuttal.\\n\\nI have one follow-up question regarding the comment \\n> We observe that inappropriate choice of the threshold can adversely affect both RoMa and our method. For example, increasing the threshold from 0.05 to 0.1 degrades the performance of our method by 2.6%, and RoMa by 1.7% on MegaDepth-1500 corrupted by Gaussian noise at severity level 5. A sampling technique without such hardcoded threshold would be an interesting future work. \\n\\nThis sounds like reducing the threshold to (say) 0.025 could be a good idea. Or is 0.05 perfectly tuned for image corruptions?\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for the prompt response. \\n\\nFirstly, to address \\\"Actually, given our method outperforms other methods consistently in large epsilon values from 0.1 to 1, we do not see why the trend should be reversed for small epsilon values.\\\": Currently, the evaluations begin from a very high epsilon value of 0.1, and the gap between RoMa and the proposed method is almost negligible until epsilon=0.1, compared to performances of other methods under adversarial attacks. Evaluations using epsilon 0.1 and higher are almost meaningless, as at this point, it is not an adversarial attack anymore; the permissible perturbation budget is called epsilon since it is a minimal value, hence the use of the word and variable epsilon. However, the perturbation budget of 0.1 is not small. \\nMoreover, the claim in this paper is that the proposed method is \\\"more robust\\\" than RoMa. Had the claim been \\\"as robust as RoMa\\\", the current evaluations would have been acceptable. However, if the claim of \\\"more robust\\\" is to be proved, meaningful comparisons need to be made. 
I will explain \\\"meaningful comparisons\\\" in my next point.\\n\\nSecondly, I see that the NeurIPS paper you cited also follows a similar regime. Unfortunately, that paper did not have reviewers to point this out; fortunately, this paper does. Adversarial attacks are a method to test the reliability of all Deep Learning based methods, use of these attacks is not limited to evidential learning-based methods. \\nStarting from the FGSM paper to PGD, APGD, AutoAttack, SegPGD, and CosPGD, all well-known $\\\\ell_{infinity}$-norm bounded white-box adversarial attacks use $\\\\epsilon \\\\in [\\\\frac{1}{255}, \\\\frac{12}{255}]$ and never more, as beyond this the value it is not really $\\\\epsilon$ anymore, the perturbations are too large and thus they are not an admissible adversarial attack as they are not meaningful attacks anymore.\\n\\nI hope I have been able to put across this concern. I would strongly recommend fixing the adversarial attack evaluations.\\n\\nBest Regards\\n\\nReviewer vn4d\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the response. The rebuttal addressed most of my concerns, so I raised my score to 5. The remaining concern is that the novelty of this work is somehow incremental, i.e., incorporating EDL to dense feature matching intuitively. Moreover, the overall improvement of this work is mainly in corrupted images, which is not significant enough for the foundational feature-matching task.\"}", "{\"metareview\": \"This submission proposes a method to increase the corruption robustness of dense feature matching. 
To this end, the paper proposes to modify the model's certainty prediction to predict the parameters of a Dirichlet distribution over probabilities of a correspondence being reliable or unreliable.\\nThe proposed approach is simple and effective, the paper is well written, and the method has been evaluated and proven to be beneficial on two commonly used datasets, MegaDepth-1500 and ScanNet-1500, with common corruptions and FGSM and PGD adversarial attacks. While two out of four reviewers give final scores of 5, they agree on the merit of the submission in terms of improving the robustness.\", \"additional_comments_on_reviewer_discussion\": \"Two of the reviewers rate the paper with a score of 5 even after the rebuttal. Reviewer vn4d initially had several concerns that were addressed during the rebuttal. The authors' final numerical update, improving the adversarial attack evaluation, could address the remaining concerns. The AC strongly encourages the authors to also transfer these results to the paper. Reviewer Uwz6 raised the score from an initial 3 to 5 during the rebuttal, pointing out that the improved corruption robustness of feature matching by the approach is not translated into an improvement on clean data. While this is true, I agree with the other reviewers that the improved robustness is in itself a valuable contribution.\"}", "{\"summary\": \"This paper applies evidential deep learning to feature matching tasks, introducing an evidential learning framework for certainty estimation in dense feature matching problems. The proposed method enhances robustness in dense matching against corruptions and adversarial attacks, with extensive experiments conducted and visualizations presented to demonstrate its performance.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-motivated, and the main idea is clearly explained.\\n\\n2. 
Experiments are conducted across a wide range of benchmarks with various types of corruptions and adversarial attacks. The proposed method outperforms the baselines in most cases.\\n\\n3. The paper includes visualizations to analyze why the proposed method performs better than the comparison method, particularly on corrupted data across different datasets.\", \"weaknesses\": \"Several questions need to be addressed:\\n\\n1. This work applies a two-dimensional evidential deep learning (EDL) framework to certainty estimation in both coarse-scale and fine-scale losses. What would happen if EDL were applied to only one of these loss scales? Conducting an ablation study could provide insights into the effectiveness of EDL at each scale. It would be great if the authors could report performance results by applying EDL exclusively to coarse-scale or fine-scale losses, compared to using it on both losses. \\n\\n2. Experiments are conducted on two datasets, MegaDepth-1500 and ScanNet-1500. There are other datasets mentioned in the RoMa paper, such as the street-view IMC 2022 and the challenging WxBS Benchmark. Evaluating the proposed method on those different datasets could further demonstrate its generalizability across diverse scenarios. \\n\\n3. The proposed framework incorporates evidential deep learning into the training process. Could you provide details on how the proposed framework affects computational time, specifically in terms of training and inference times compared to the baseline RoMa method?\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"* W1. Implementation details are missing\\n\\nWe added the required details in A.1 of the revised paper. \\n\\n* W2. 
Needs a stronger argument for why corruption and adversarial robustness is important for dense feature matching\\n\\nFeature matching models are typically trained with 3D supervision, i.e., ground truth matches are established by using 3D information including camera pose and depth. 3D datasets are more expensive to collect and are usually smaller in size compared to 2D datasets. For example, the MegaDepth dataset used in our paper contains 260k images, while the scale of modern 2D image datasets is usually in the millions (ImageNet-1k: 1.3M, ImageNet-21k: 14M, JFT: 303M). The relatively small amount of training data may limit the model's robustness to testing data that differ from the training distribution. Previous benchmarking datasets like IMC2022 and WxBS focus on evaluating images captured in different conditions, e.g., viewpoints, timings (day vs. night, years apart) and illuminations. However, these benchmarks do not consider corruptions that are likely to occur in the real world, e.g., various kinds of imaging noise, blurring caused by camera motion, and reduced visibility caused by adverse weather. Adversarial attacks serve as a worst-case analysis of model robustness. When deploying a feature matching model in safety-critical applications, it is essential to understand the lower bound of correctness. Adversarial robustness is thus investigated in our paper. We have modified the Introduction section to better highlight the relevance of the proposed study.\\n\\n* W3. Outdated evaluations for robustness\\n\\nWe added experiments on 3D Common Corruptions (3DCC) [Kar et al., 2022] and CosPGD [Agnihotri et al., 2024] in A.2 of the revised paper. For 3DCC, we evaluate on three corruption types (low light noise, ISO noise and color quantization). Our method demonstrates a significant and consistent advantage over RoMa, achieving up to a 9.4% increase in AUC@5 on MegaDepth-1500-3DCC. 
We also observe that for corruption types that require depth information for generation, e.g., motion blur and fog 3D, the generation code of 3DCC cannot work for the MegaDepth dataset, resulting in unrealistic corrupted images (some examples are provided in Fig.9). We need more time to look into the 3DCC code to adjust the parameters and thus do not provide the results for other corruption types in 3DCC. For CosPGD, our method achieves up to a 2.8% gain over RoMa. \\n\\n* W4. Using 2D Common Corruptions on other known datasets is not always a novel contribution\\n\\nWe have modified our contribution to \\\"We propose to evaluate the robustness of feature matching methods under common image corruptions and adversarial attacks, which has not been studied in previous work\\\".\\n\\n* W5. Almost redundant presentation of results in Table 1 and Figure 3\\n\\nIn Table 1, we not only provide the mean AUC for each corruption type, but also the clean AUC and the mean AUC over all corruption types, which cannot be achieved by drawing dashed lines in each subfigure of Figure 3. Also, we believe it is informative to provide a table summarizing the numerical results for each dataset. Therefore, we choose to keep Table 1.\\n\\n* Q1. Evaluation details for adversarial attacks\\n\\nThe attacks are L-infinity norm bounded. The valid image space is [-2.1, 2.6] (the images are first scaled to [0,1] and then mean-std normalized using ImageNet mean and std values). Attacks with epsilon=1 approximately modify pixel value magnitudes by 20%, which is noticeable yet still semantically meaningful to human eyes. In practice, adversarial attacks are desired to be imperceptible, so it is unlikely to use such large epsilon values. Here we experiment with [0, 1] to reveal the trend of degradation under different perturbation budgets. We have modified the description in Section 4.3 to provide the evaluation details and avoid confusion. \\n\\n* Q2. 
Novelty of using evidential learning for the pixel-matching task and difference from [Wang et al., 2022]\\n\\nThe different distributions mentioned in the comment (Normal Inverse-Gamma (NIG) distribution in [Wang et al., 2022] vs. Dirichlet distribution in our work) stem from the different formulations of evidential learning: [Wang et al., 2022] use the deep evidential regression formulation [Amini et al., NeurIPS2020], while ours uses the classic classification formulation [Sensoy et al., NeurIPS2018]. The nature of the tasks (stereo matching in [Wang et al., 2022] and certainty estimation in our work) necessitates different choices of formulation. The formulation in [Wang et al., 2022] cannot be used for our task -- it assumes a Gaussian distribution of the predicted variable, which does not hold for a classification task. Therefore, despite both works dealing with a pixel-matching task, they formulate it in very different ways. Our formulation of using evidential learning for certainty estimation has not been explored in previous work, which constitutes a novel contribution.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer oVDg,\\n\\nThanks for your further comments. We ran the experiment suggested by the reviewer. We observe that reducing the score threshold to 0.025 improves RoMa's performance by 2.5% on MegaDepth-1500 corrupted by Gaussian noise at severity level 5. The best result, 31.1%, is obtained when we further reduce the score threshold to 0.01. Compared to our result of 37.1%, there is still a gap of 6%. This suggests that changing the threshold can only partially address the problem. We agree with the reviewer that it is relevant to know whether RoMa can be made to work on corrupted images simply by changing the score threshold. We will add these additional results in our revision.\\n\\nYes. Your understanding is correct. 
We still use the same balanced sampling algorithm as in RoMa.\"}", "{\"summary\": \"The authors propose modelling certainty of correspondences in dense matchers using an evidential deep learning approach. Instead of just estimating a certainty between 0 and 1, the model outputs the parameters of a Dirichlet distribution over probabilities of the two classes \\\"the predicted correspondence is reliable\\\" and \\\"the predicted correspondence is unreliable\\\". The certainty output is then the expected probability of the first class according to this Dirichlet distribution. The authors show experimentally by retraining the dense matcher RoMa that their approach leads to improved certainty scores, in particular leading to increased robustness to image corruptions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a simple tweak to the RoMa-method, which gives good results experimentally.\\n2. Within the framework of the RoMa matcher, certainties are updated iteratively in each warp refinement step by a \\\"logit-offset\\\" in the original model. It seems more intuitive to let each refinement step produce positive evidence values for the two classes \\\"correct\\\" and \\\"incorrect\\\" that are summed over the steps, as is done in this paper. Perhaps, the authors could expand on this in the paper.\\n3. In general, outputting good certainties is an underexplored part of deep learning for 3D vision. The application of evidence based learning to dense matchers is novel.\", \"weaknesses\": \"1. The experiments are limited in that only a single model is tested. Hence, it is difficult to say if the improvements generalize to other models.\\n2. The explanation of evidential deep learning was difficult to understand, and I had to refer back to the original paper by Sensoy et al. I think this section could be improved.\\n3. Evidential deep learning has a built-in uncertainty measure. 
In the context of the present paper, we get a Dirichlet distribution over the classes \\\"the predicted correspondence is reliable\\\" and \\\"the predicted correspondence is unreliable\\\", and the associated uncertainty describes how spread out this Dirichlet distribution is. This uncertainty is however not used in the present approach. This makes it a bit difficult to interpret the method. We get a Dirichlet distribution but only use its expected value over the first class. This expected value should signify correspondence reliability, but there is also an uncertainty of this prediction inherent in the Dirichlet distribution, which is not used. Since RoMa uses regression-by-classification in the coarse matching step, a more natural approach may be to reformulate the loss for that classification over $N\\\\times N$ image patches as evidential and use the uncertainty of the predicted Dirichlet distribution as an uncertainty score.\\n\\n\\n### Post-rebuttal:\\n- The authors have added experiments with DKM to address weakness 1.\\n- Weakness 2 has been addressed.\\n- Weakness 3 has not been addressed but is left for future work.\\n- My questions below have also been answered satisfactorily.\\n\\nLooking at the other reviews there are two main remaining weaknesses\\n- The novelty is quite limited.\\n- The results are not significantly better than RoMa, except under major image degradations.\\n\\nAll in all, I will raise my score to \\\"accept\\\", but note the two weaknesses above.\", \"questions\": \"1. How much does the threshold used for balanced sampling matter for the robustness under image degradations? Both in the original RoMa and the new model.\\n2. My reading is that computationally, the new certainty estimation is more or less as heavy as the old in terms of inference and training speed. Is this correct?\\n3. Is the increased performance due to only better certainties? 
For example, is the performance the same if certainty scores from the new model are combined with matches from the original RoMa?\\n4. Is there a way to make use of the built-in uncertainty in the Dirichlet distribution as described in Weakness 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4NTrco82W0
Beyond Squared Error: Exploring Loss Design for Enhanced Training of Generative Flow Networks
[ "Rui Hu", "Yifan Zhang", "Zhuoran Li", "Longbo Huang" ]
Generative Flow Networks (GFlowNets) are a novel class of generative models designed to sample from unnormalized distributions and have found applications in various important tasks, attracting great research interest in their training algorithms. In general, GFlowNets are trained by fitting the forward flow to the backward flow on sampled training objects. Prior work focused on the choice of training objects, parameterizations, sampling and resampling strategies, and backward policies, aiming to enhance credit assignment, exploration, or exploitation of the training process. However, the choice of regression loss, which can highly influence the exploration and exploitation behavior of the under-training policy, has been overlooked. Due to the lack of theoretical understanding for choosing an appropriate regression loss, most existing algorithms train the flow network by minimizing the squared error of the forward and backward flows in log-space, i.e., using the quadratic regression loss. In this work, we rigorously prove that distinct regression losses correspond to specific divergence measures, enabling us to design and analyze regression losses according to the desired properties of the corresponding divergence measures. Specifically, we examine two key properties: zero-forcing and zero-avoiding, where the former promotes exploitation and higher rewards, and the latter encourages exploration and enhances diversity. Based on our theoretical framework, we propose three novel regression losses, namely, Shifted-Cosh, Linex(1/2), and Linex(1). We evaluate them across three benchmarks: hyper-grid, bit-sequence generation, and molecule generation. Our proposed losses are compatible with most existing training algorithms, and significantly improve the performances of the algorithms concerning convergence speed, sample diversity, and robustness.
[ "GFlowNet", "Generative Models", "f-Divergence", "Loss Function" ]
Accept (Spotlight)
https://openreview.net/pdf?id=4NTrco82W0
https://openreview.net/forum?id=4NTrco82W0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xWdfqiEEIa", "thk7G84gkr", "rFLI14xcEL", "pyQYCkOuvH", "oIJ7wuJGNO", "lasxDn5za6", "l0ODPPBKd4", "g7g5uVu4Ga", "dA0oEBnznO", "Rn2KSrGoHJ", "PhUUuzLGRn", "PFDyqg6opa", "Ob2i5hQ7Fk", "OZpGOxoUkP", "KnG3v3Xjle", "EgnVkXknsz", "EWq2ajIj7m", "A5LTOJHnT9", "3mIWZPymAN", "2cyK8X7iQg", "1Dbjby00LS" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "decision", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732504360183, 1732578930915, 1732379160300, 1732376848940, 1732376746782, 1732579964613, 1732376535191, 1732849651073, 1732709584261, 1732849526431, 1737523966214, 1730539630429, 1732539585529, 1734738902375, 1732588829549, 1732376893580, 1730948197080, 1732539583791, 1730752584053, 1732539828925, 1732376374841 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_Xe2i" ], [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_EhxP" ], [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_8npT" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_EhxP" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "~Eliezer_de_Souza_da_Silva1" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_8npT" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Area_Chair_9rBq" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9174/Reviewer_Xe2i" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Reviewer_EhxP" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ], [ "ICLR.cc/2025/Conference/Submission9174/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you, this is helpful and I increase my score.\"}", "{\"comment\": \"The rebuttal clarified my concerns. Score is updated. Thanks.\"}", "{\"title\": \"Revision score\", \"comment\": \"Thank you, this addresses a large portion of my requests. The paper is more detailed and better off this way. I am updating my score from 6/10 to 8/10.\"}", "{\"title\": \"Author Responses(1/2)\", \"comment\": \"Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have.\\n\\n> **W1**: it would be nice to have a few more choices of losses, taking inspiration let say from f-divergences (Reverse-KL, JSD, etc.)\\n> **W2**: there may be more than just zero-forcing and zero-avoiding to the key properties of loss functions hence why studying more losses would be helpful\\n\\nFollowing your suggestion, we have further developed five novel divergence-based loss functions, including forward and reverse $\\\\chi^2$ distance, total variation, symmetric KL divergence, and Jensen-Shannon divergence. We conducted experiments on the bit-sequence generation task and observed that losses with the same zero-forcing/zero-avoiding properties lead to similar behaviors (see Tables 1, 2, and 3 below). This finding suggests that zero-forcing and zero-avoiding are the primary properties and the four representative losses discussed in our paper effectively capture their impacts. 
We have included the new loss functions and experimental results in **Appendix F** in the revision.\\n\\nIt is very interesting to thoroughly explore other properties and effects of regression losses within the realm of $f$-divergence and beyond. We consider this an important topic for future research. We believe that the systematic framework presented in this work\\u2014comprising a unified framework that generalizes the training objectives of GFlowNets by identifying five key components, as well as the two-way connection between $f$-divergence and the regression loss function $g$\\u2014will be highly beneficial.\\n\\nP.S. The commonly used quadratic loss corresponds to reverse KL divergence, while our proposed Linex(1) loss corresponds to forward KL divergence.\", \"table_1\": \"Five novel loss functions\\n| Loss | Divergence | Zero-forcing | Zero-avoiding |\\n| -- | -- |:--:|:--:|\\n| $g(t)=\\\\frac{1}{4}(e^{2t}-2t-1)$ | Forward $\\\\chi^2$ | | $\\\\checkmark$ |\\n| $g(t)=e^{-t}+t-1$ | Reverse $\\\\chi^2$ | $\\\\checkmark$ | |\\n| $g(t)=\\\\frac{1}{2}\\\\|t\\\\|$ | Total Variation | | |\\n| $g(t)=\\\\frac{1}{2}(e^t+\\\\frac{1}{2}t^2-t-1)$ | Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ |\\n| $g(t)=\\\\frac{1}{2}\\\\int_{0}^t\\\\log\\\\frac{e^x+1}{2}dx$ | Jensen-Shannon | | |\", \"table_2\": \"The number of runs that find all modes within 250k steps, and the median of the steps before they find all modes.\\n| | Zero-forcing | Zero-avoiding | TB | DB | STB |\\n|--|:--:|:--:|:--:|:--:|:--:|\\n| Reverse KL (baseline) | $\\\\checkmark$ | | $1/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $\\\\underline{5/5}$, $13.4k$ | $4/5$, $\\\\ 50.6k\\\\ $ |\\n| Reverse $\\\\chi^2$ | $\\\\checkmark$ | | $0/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $0/5$, $\\\\ \\\\ -\\\\ \\\\ \\\\ $ | $0/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ $ |\\n| Forward KL | | $\\\\checkmark$ | $\\\\underline{5/5}$, $\\\\ 98.0k\\\\ $ | $\\\\underline{5/5}$, $10.8k$ | $\\\\underline{5/5}$, $\\\\ 20.3k\\\\ $ 
|\\n| Forward $\\\\chi^2$ | | $\\\\checkmark$ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{80.3k}$ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{8.1k}\\\\ $ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{10.2k}$ |\\n| Hellinger | | | $\\\\underline{5/5}$, $111.2k$ | $\\\\underline{5/5}$, $11.7k$ | $\\\\underline{5/5}$, $\\\\ 55.9k\\\\ $ |\\n| Total Variation | | | $1/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $\\\\underline{5/5}$, $47.1k$ | $2/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ |\\n| Jensen-Shannon | | | $4/5$, $162.2k$ | $\\\\underline{5/5}$, $12.8k$ | $3/5$, $165.2k$ |\\n| Shifted-Cosh | $\\\\checkmark$ | $\\\\checkmark$ | $4/5$, $\\\\ 92.2k\\\\ $ | $0/5$, $\\\\ \\\\ -\\\\ \\\\ \\\\ $ | $\\\\underline{5/5}$, $\\\\ 90.0k\\\\ $ |\\n| Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ | $4/5$, $122.2k$ | $\\\\underline{5/5}$, $13.7k$ | $\\\\underline{5/5}$, $\\\\ 27.5k\\\\ $ |\", \"table_3\": \"The Spearman correlation between $P_T$ and $P_R$ over a test set (the higher the better). The failed runs where modal collapse happened are eliminated.\\n| | Zero-forcing | Zero-avoiding | TB | DB | STB |\\n|--|:--:|:--:|:--:|:--:|:--:|\\n| Reverse KL (baseline) | $\\\\checkmark$ | | $\\\\underline{0.8081}(\\\\pm0.0159)$| $0.7907(\\\\pm0.0175)$ | $\\\\underline{0.8088}(\\\\pm0.0169)$|\\n| Reverse $\\\\chi^2$ | $\\\\checkmark$ | | $\\\\underline{0.8074}(\\\\pm0.0129)$| - | $\\\\underline{0.7899}(\\\\pm0.0166)$|\\n| Forward KL | | $\\\\checkmark$ | $0.7421(\\\\pm0.0216)$ | $0.7464(\\\\pm0.0107)$ | $0.7517(\\\\pm0.0246)$ |\\n| Forward $\\\\chi^2$ | | $\\\\checkmark$ | $0.7507(\\\\pm0.0174)$ | $0.7266(\\\\pm0.0178)$ | $0.7439(\\\\pm0.0126)$ |\\n| Hellinger | | | $0.7454(\\\\pm0.0021)$ | $0.7580(\\\\pm0.0132)$ | $0.7711(\\\\pm0.0190)$ |\\n| Total Variation | | | $\\\\underline{0.7893}(\\\\pm0.0144)$| $0.7266(\\\\pm0.0178)$ | - |\\n| Jensen-Shannon | | | $\\\\underline{0.7852}(\\\\pm0.0256)$| $0.7542(\\\\pm0.0046)$ | $0.7640(\\\\pm0.0213)$ |\\n| Shifted-Cosh | $\\\\checkmark$ | 
$\\\\checkmark$ | $\\\\mathbf{0.8122}(\\\\pm0.0145)$ | $\\\\mathbf{0.8213}(\\\\pm0.0094)$| $\\\\mathbf{0.8132}(\\\\pm0.0149)$ |\\n| Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ | $\\\\underline{0.7908}(\\\\pm0.0235)$| $0.7630(\\\\pm0.0097)$ | $\\\\underline{0.7886}(\\\\pm0.0227)$|\"}", "{\"title\": \"Author Responses(1/1)\", \"comment\": \"Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have.\\n\\n> **W1**: The empirical results seem to be weak; they are only varied in synthetic tasks.\\n\\nWe emphasize that the tasks we conducted are popular and are widely adopted in the GFlowNets literature, e.g., [1][2][3][4]. In addition, the tasks are actually very challenging with high state and action dimensions. \\n\\nFor instance, the molecule generation task is a challenging real-world application. The reward is a prediction of the binding energy of a molecule to a particular protein target sEH. There are up to $10^{16}$ valid states and between $100$ to $2000$ actions depending on the state.\\n\\nThe bit-sequence generation task is also difficult. Under the configuration we choose ($n=120$, $k=8$), there are more than $10^{36}$ valid states and a range of $256$ to $3840$ actions, as the sequence is generated in a non-autoregressive manner.\\n\\n> **Q1**: Making a literature connection with f-GAN, which also uses f-divergence in GANs, might be insightful to readers.\\n\\nThe original training objectives of both GFlowNets and GANs are theoretically linked to the reverse KL divergence. The f-GAN framework allows for the use of a broader class of divergence measures, specifically $f$-divergences, in training generative samplers. Similar approaches have also been applied to other algorithms, including VAE, VI, DPG, and DPO. 
\\n\\nInspired by these efforts, we established the connection between $f$-divergence and the regression loss function $g$ within the training objectives of GFlowNets, based on which we derive novel loss functions for GFlowNets from various divergence measures. To demonstrate the effectiveness of these new loss functions, we conducted experiments across three different tasks: hyper-grid generation, bit-sequence generation, and molecule generation.\\n\\nFollowing your suggestion, we have revised the relevant paragraphs in **Section 2** in the revision.\\n\\n> **Q2**: Include a discussion connecting with off-policy exploration methods. Are your loss and off-policy search orthogonal? Which means, is your loss function combined with an off-policy method (e.g., local search) better than the TB loss combined with an off-policy method?\\n\\nOur loss functions and off-policy exploration methods are almost orthogonal. \\n\\nFirstly, our proposed loss functions, when combined with different exploration strategies, provide valid training objective functions, in the sense that the target distribution is perfectly matched if and only if the loss becomes zero.\\n\\nSecondly, our experiments in three different environments adopting forward policy and $\\\\epsilon$-noisy forward policy as the exploration strategy, have shown the robustness of our analysis on different $g$ functions concerning the deviation of $\\\\mu$ from the desired one in Theorem 4.1.\\n\\nConsequently, our loss functions can, in principle, be integrated with existing GFN training methods, including off-policy exploration strategies like local search, and be expected to preserve the exploration/exploitation features in such cases. \\n\\nTo better explain this in the paper, we have included the above discussion at the end of **Section 4** in the revision.\\n\\n> **Q3**: It's good to see the categorization of prior GFlowNet works. 
Can you include this recent work [1] that uses genetic search as an off-policy method for training GFlowNets and provide some discussion?\\n\\nThank you for providing the latest related work in the area. We have included it in our revision (**Section 4.1** and **Appendix A.2**).\\n\\nWe hope our responses fully address your concerns. If so, we wonder if you could kindly consider raising your rating score? We will also be happy to answer any further questions you may have. Thank you very much!\\n\\n[1]Tiapkin, et al. \\\"Generative flow networks as entropy-regularized rl.\\\" ICAIS 2024.\\n\\n[2]Bengio, et al. \\\"Flow network based generative models for non-iterative diverse candidate generation.\\\" NeurIPS 2021\\n\\n[3]Malkin, et al. \\\"Trajectory balance: Improved credit assignment in gflownets.\\\" NeurIPS 2022\\n\\n[4]Madan, et al. \\\"Learning gflownets from partial episodes for improved convergence and stability.\\\" ICML 2023\"}", "{\"title\": \"Last comment before camera ready\", \"comment\": \"This is a good paper, but there still exist some minor points that can make it more professional.\\n\\nBefore submitting the camera-ready version (no need to revise this in discussion period), please revise the references; some of them have already been published in the venue but are still marked as arXiv preprints. \\n\\n**Tip**: Do not rely on citation systems like Google Scholar (they are often suboptimal and not up-to-date); try to manually create a BibTeX file on your own (with strict rules). 
I have listed below the papers that have been published yet are noted as arXiv in your paper:\\n\\n\\n---\\n\\nGflownet foundations --> JMLR\\n\\nOrder-preserving gflownets --> ICLR\", \"extreme_q_learning\": \"Maxent rl without entropy --> ICLR\\n\\nGenerative flow networks assisted biological sequence editing --> NeurIPS \\n\\nAmortizing intractable inference in large language models --> ICLR\\n\\nLearning energy decompositions for partial inference of gflownets --> ICLR\\n\\nPessimistic backward policy for gflownets --> NeurIPS\\n\\nLearning to scale logits for temperature-conditional gflownets --> ICML\\n\\nLocal search gflownets --> ICLR\", \"qgfn\": \"Controllable greediness with action values --> NeurIPS\\n\\nGflownets and variational inference --> ICLR\\n\\nGenerative augmented flow networks --> ICLR\\n\\nAmortizing intractable inference in diffusion models for vision, language, and control --> NeurIPS\", \"diffusion_generative_flow_samplers\": \"Improving learning signals through partial trajectory optimization --> ICLR\\n\\nDistributional gflownets with quantile flows --> TMLR\"}", "{\"title\": \"Author Responses(2/2)\", \"comment\": \"> **W2**: Although the novelty lies in extending GFlowNet loss functions, there are similar attempts in reinforcement learning and generative models.\\n\\nWhile our work was inspired by these attempts in other areas, we are advancing beyond those efforts in several ways.\\n\\nFirstly, we introduced a novel framework that unifies the different training objective functions of GFlowNets. This framework enables us to identify the key components of these objective functions. 
\\n\\nSecondly, we established a two-way connection between the $f$-divergence and the regression loss function $g$, which allows us not only to derive $g$ from a well-known $f$, but also to analyze any arbitrary $g$ using the corresponding $f$ (for example, the shifted-cosh loss and its related divergence).\\n\\nThirdly, by utilizing the regression loss function $g$ instead of directly minimizing the $f$-divergence, our method can be effectively applied to off-policy or even offline training settings.\\n\\nTo better connect with this line of work and to highlight our novel contributions in comparison to existing efforts, we have revised **Section 2** in our revision.\\n\\n > **W3**: Although the paper derives theoretical properties of zero-forcing and zero-avoiding, it lacks direct theoretical comparison with existing GFlowNet training algorithms.\\n\\nOur proposed method is orthogonal to almost all existing GFlowNet training algorithms. Indeed, one of our main contributions is the development of a unified framework that identifies five key components involved in GFlowNet training algorithms: backward policy, training objectives, parameterization mapping, sampling/resampling weights, and regression loss. Most existing algorithms have primarily focused on all these components except for regression loss, making our work the first to investigate this critical yet often overlooked aspect. 
As a result, our proposed loss functions can, in principle, be integrated with most existing GFlowNet training methods, while still preserving their exploration and exploitation features.\\n\\nTo provide a better clarification, we included a comparison of our theoretical results with previous findings in **Section 2**, along with a discussion on the compatibility of our methods with various exploration strategies at the end of **Section 4** in the revision.\\n\\n> **Q1**: Could the authors clarify if they\\u2019ve noticed stability shifts in higher-dimensional or complex tasks and if adjustments might bolster robustness?\\n\\nWe appreciate the reviewer's observation on this important issue. In fact, tackling higher-dimensional or complex tasks is still an open problem and is not yet fully addressed. \\n\\nEmpirically, we increased the dimension in the hyper-grid environment in our experiments (from $20^4$ states to $20^5$ states). We observe that the baseline always fails to fit the distribution, while our proposed losses remain robust in most of the cases. These results have been included in **Section 5.1** in the revision.\\n\\nStudying the scalability and robustness of GFlowNet training algorithms is a very interesting topic for future research.\\n\\n> **Q3**: Lastly, insights into each loss function\\u2019s hyperparameter sensitivity and effects on convergence guarantees would further clarify their resilience and adaptability.\\n\\nFrom a theoretical aspect, although all the $f$-divergences and $g$ functions are convex, there are differences in smoothness and strong convexity that may influence the optimal selection of hyperparameters. Empirically, we follow the training configurations of prior work, e.g., [1][2], in our experiments. Our results indicate that the proposed loss functions are not sensitive to the choice of hyperparameters. 
Our choice of hyperparameters and other experimental details can be found in **Appendix E**.\\n\\nWe hope our responses fully address your concerns. If so, we wonder if you could kindly consider raising your rating score? We will also be happy to answer any further questions you may have. Thank you very much! \\n\\n[1]Tiapkin, et al. \\\"Generative flow networks as entropy-regularized rl.\\\" ICAIS 2024.\\n\\n[2]https://github.com/GFNOrg/gflownet\"}", "{\"comment\": \"We have updated the references in our latest revision. Thank you for your kind advice!\"}", "{\"title\": \"Some relevant references\", \"comment\": \"We appreciated the authors' contributions to this domain and wanted to highlight some missing relevant references.\\n\\nIn our recent work, published at NeurIPS 2024, we investigated the properties of $\\\\alpha$-divergences (including forward and reverse Kullback-Leibler (KL), R\\\\'enyi-$\\\\alpha$, and Tsallis-$\\\\alpha$ divergences) in the context of training GFlowNets. In particular, we also presented the trade-offs between zero-forcing and zero-avoiding behaviors for this family of learning objectives. \\n\\nWhile this paper approaches the problem differently, we believe that acknowledging our work would provide additional context for understanding the theoretical and practical aspects of GFlowNet training. \\n\\nAdditionally, we noticed that Heiko Zimmermann et al.'s work exploring the relationship between GFlowNets and Variational Inference (VI), appears to be missing from the related discussion. Including this reference would enrich the paper's perspective and be appropriate for a complete overview of recent works.\\n\\nThank you for considering our feedback. We hope this fosters a richer discussion and continued progress in the field!\", \"references\": \"1. Tiago Silva, Eliezer de Souza da Silva, and Diego Mesquita. *On Divergence Measures for Training GFlowNets.* NeurIPS, 2024. [Link](https://openreview.net/forum?id=N5H4z0Pzvn) \\n2. 
Heiko Zimmermann et al. *A Variational Perspective on Generative Flow Networks.* Transactions on Machine Learning Research, 2023. [Link](https://openreview.net/forum?id=AZ4GobeSLq)\"}", "{\"comment\": \"Thank you for pointing to relevant references. We have now included them in our latest revision.\\n\\nTo connect your studies with ours, the forward KL divergence corresponds to our proposed Linex(1) loss, while the Tsallis-$\\\\alpha$ divergence with $\\\\alpha=0.5$ corresponds to our proposed Linex(1/2) loss (up to a multiplicative constant). Following our analysis, the performance gain from using such divergence measures may stem from the zero-avoiding or non-zero-forcing properties. Additionally, by utilizing the $g$ function instead of directly optimizing the $f$-divergence, our method can accommodate balance conditions beyond just TB, and allows for off-policy exploration strategies.\\n\\nWe agree that approaching this problem from different perspectives enhances our understanding of both the theoretical and practical aspects of GFlowNet training. We really appreciate your comments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"summary\": \"The authors propose to modify the loss function of Gflownet (which has been completely overlooked by prior work). They show that distinct losses lead to different divergences. 
They propose three new loss functions and evaluate them extensively on diverse benchmarks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Contributions:\", \"generalizing the objective function of gflownet\", \"derive impact of loss function on the gradient\", \"define zero-forcing (encourage exploitation) and zero-avoiding (encourage exploration) as two key properties induced by certain loss functions\", \"They create 3 new losses (alongside the existing quadratic loss) to tackle all 4 possible combinations (with/without zero-avoiding, with/without zero-forcing)\", \"Linex(1) corresponds to the KL divergence\", \"experiments on 3 datasets\", \"Non-zero-forcing losses (Linex(1) and Linex(0.5)) converge faster on hyper-grid\", \"Linex(1) obtains all the modes almost always the fastest, but spearman corr between train and test is highest for shifted-cos on bit-sequence\", \"Linex(1) tends to increase diversity while quadratic and shifted-cos give higher quality (high average rewards) on molecule generation\", \"Paper is well written.\"], \"weaknesses\": [\"it would be nice to have a few more choices of losses, taking inspiration let say from f-divergences (Reverse-KL, JSD, etc.)\", \"there may be more than just zero-forcing and zero-avoiding to the key properties of loss functions hence why studying more losses would be helpful\", \"it would be nice to let say consider hybrid methods with some kind of annealing. For example, why not use Linex(1) for its fast convergence to a large number of nodes, before then transitioning to shifted-cos for higher rewards around those now-discovered modes.\", \"So to me, the paper is great, but it's kind of stopping too quickly, it feels like it's only just tapping the surface. These kinds of ideas would be easy to test out and add to the paper.\", \"If the authors add a bit more meat to the paper, i.e. 
extra loss functions and hybrid-annealing (as discussed above), I would likely increase my score. Like I said, it's just missing a little bit of filling to make it a great paper.\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your response, which helped significantly improve our paper! We really appreciate your time and effort in reviewing our paper.\"}", "{\"metareview\": \"In the paper, the authors addressed the issue of choosing regression loss in Generative Flow Networks (GFlowNets). They demonstrated a connection between regression losses and specific divergence measures. This connection enables the systematic design and evaluation of regression losses, tailored to the unique properties of their corresponding divergence measures.\\n\\nAll the reviewers agree that the theoretical results are novel and insightful. The presentation of the paper is clear and easy to follow. After the rebuttal, most of the remaining concerns of the reviewers are addressed and all the reviewers are happy with the current stage of the paper. \\n\\nWhile there are some concerns about the limited experimental results, I believe that the current novelty and contribution of the paper are sufficient for ICLR. Therefore, I recommend accepting the paper at the current stage. The authors are encouraged to incorporate the feedback and comments of the reviewers into the camera-ready version of their paper.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta-review.\"}", "{\"comment\": \"Thank you very much for raising your rating and your kind advice! We will be sure to update the references as suggested.\"}", "{\"title\": \"Author Responses(2/2)\", \"comment\": \"> **W3**: it would be nice to let say consider hybrid methods with some kind of annealing. 
For example, why not use Linex(1) for its fast convergence to a large number of nodes, before then transitioning to shifted-cos for higher rewards around those now-discovered modes.\\n\\nThank you for your kind suggestion. It is quite an interesting idea! However, after our initial attempt, we decided not to include it in this paper and instead consider it as potential future work. There are two main reasons for this decision.\\n\\nFirst, the annealing loss did not perform as well as we had expected during our experiments. We suspect this is because the convergence point of different losses, i.e., the best approximation of the target distribution with respect to distinct divergence measures, can vary significantly, especially in complex real-world tasks. A simple example is that the best Gaussian approximation of a mixture of Gaussians w.r.t. reverse KL is to fit the dominant peak, while that w.r.t. forward KL tends to cover the whole support. Therefore, a more in-depth exploration is required to determine how to handle this process effectively.\\n\\nSecond, achieving faster convergence is not the primary motivation for using zero-avoiding losses. In many real-world scenarios, the focus may be on enabling the trained GFlowNet to provide a diverse range of candidates. Considering this, it may be preferable to offer different options for different needs rather than to present a single seemingly \\\"optimal\\\" solution.\\n\\nNonetheless, it remains intriguing to investigate whether it is possible to achieve the \\\"best of both worlds\\\" or even the \\\"best of all worlds\\\" with hybrid annealing strategies, and we plan to leave that for future research. \\n\\n\\nWe hope our responses fully address your concerns. If so, we wonder if you could kindly consider raising your rating score? We will also be happy to answer any further questions you may have. 
Thank you very much!\"}", "{\"summary\": \"This paper presents a novel framework for GFlowNet objective functions, unifying existing training algorithms and clarifying key components. By establishing a connection between objective functions and divergence measures, it offers valuable insights into designing effective training objectives. The authors investigate key regression properties\\u2014zero-forcing and zero-avoiding\\u2014and propose three new loss functions (Linex(1), Linex(1/2), and Shifted-Cosh) to balance exploration and exploitation. Extensive experiments on benchmarks, including hyper-grid, bit-sequence, and molecule generation, show that these losses outperform the common squared loss in convergence speed, diversity, quality, and robustness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces a systematic framework for designing regression losses in GFlowNet training, linking each loss to specific divergence measures for targeted properties, resulting in three new losses\\u2014Shifted-Cosh, Linex(1/2), and Linex(1)\\u2014that enhance exploration and exploitation balance.\", \"weaknesses\": \"Broader exploration of other potential divergence-based losses would offer a more comprehensive understanding of the effects of different divergence properties on GFlowNet training. Although the novelty lies in extending GFlowNet loss functions, there are similar attempts in reinforcement learning and generative models. Although the paper derives theoretical properties of zero-forcing and zero-avoiding, it lacks direct theoretical comparison with existing GFlowNet training algorithms.\", \"questions\": \"Could the authors clarify if they\\u2019ve noticed stability shifts in higher-dimensional or complex tasks and if adjustments might bolster robustness? 
Additionally, what drives the choice of a limited set of losses\\u2014are there theoretical or practical reasons for omitting other f-divergences, like Hellinger? Lastly, insights into each loss function\\u2019s hyperparameter sensitivity and effects on convergence guarantees would further clarify their resilience and adaptability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your response, which helped significantly improve our paper! We really appreciate your time and effort in reviewing our paper.\"}", "{\"summary\": \"This paper presents a novel theoretical finding for GFlowNets regarding their objective function. Using f-divergence theories, they connect existing objectives of GFlowNets and show that they are special cases of the squared loss. They design a new loss structure that combines both properties together: (1) zero forcing (as considered in existing losses) and (2) zero avoiding, which compensates for exploration. Their new loss function seems to have empirical benefits.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n\\n2. The theories are insightful.\", \"weaknesses\": \"1. The empirical results seem to be weak; they are only varied in synthetic tasks\", \"questions\": \"Areas for improvement and suggestions:\\n\\n1. Making a literature connection with f-GAN, which also uses f-divergence in GANs, might be insightful to readers.\\n\\n\\n2. Include a discussion connecting with off-policy exploration methods. Are your loss and off-policy search orthogonal? Which means, is your loss function combined with an off-policy method (e.g., local search) better than the TB loss combined with an off-policy method?\\n\\n\\n3. It's good to see the categorization of prior GFlowNet works. 
Can you include this recent work [1] that uses genetic search as an off-policy method for training GFlowNets and provide some discussion?\\n\\n\\n\\n[1] Hyeonah Kim et al., \\\"Genetic-guided GFlowNets for Sample Efficient Molecular Optimization,\\\" NeurIPS 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder to Reviewer EhxP\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your time and effort in reviewing our paper.\\n\\nWe hope our response has adequately addressed your concerns. If you feel that our rebuttal has clarified the issues raised, we kindly ask you to consider adjusting your score accordingly. Should you have any further questions or need additional clarification, we would be more than happy to discuss them with you.\\n\\nThank you once again for your valuable feedback.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Author Responses(1/2)\", \"comment\": \"Thanks for your time and effort in reviewing our paper! Please find our responses to your comments below. We will be happy to answer any further questions you may have.\\n\\n> **W1**: Broader exploration of other potential divergence-based losses would offer a more comprehensive understanding of the effects of different divergence properties on GFlowNet training.\\n\\n> **Q2**: Additionally, what drives the choice of a limited set of losses\\u2014are there theoretical or practical reasons for omitting other f-divergences, like Hellinger?\\n\\nOur primary contribution lies in establishing a systematic framework for analyzing and designing loss functions for GFlowNets, rather than focusing on specific loss functions. 
With a theoretical understanding of the zero-forcing and zero-avoiding properties, we propose novel representative loss functions to demonstrate their effects.\\n\\nFollowing your suggestion, we have further developed five novel divergence-based loss functions, including forward and reverse $\\\\chi^2$ distance, total variation, symmetric KL divergence, and Jensen-Shannon divergence. We conducted experiments on the bit-sequence generation task and observed that losses with the same zero-forcing/zero-avoiding properties lead to similar behaviors (see Tables 1, 2, and 3 below). This finding suggests that zero-forcing and zero-avoiding are the primary properties and the four representative losses discussed in our paper effectively capture their impacts. We have included the new loss functions and experimental results in **Appendix F** in the revision. Further exploring other important divergence properties of loss functions would be an interesting future research topic.\", \"regarding_the_hellinger_distance\": \"It is a special case of $\\\\alpha$-divergence when $\\\\alpha=0.5$, based on which we derived Linex(1/2). 
We have adjusted the caption of **Table 2** in our revision for better clarification.\", \"table_1\": \"Five novel loss functions\\n| Loss | Divergence | Zero-forcing | Zero-avoiding |\\n| -- | -- |:--:|:--:|\\n| $g(t)=\\\\frac{1}{4}(e^{2t}-2t-1)$ | Forward $\\\\chi^2$ | | $\\\\checkmark$ |\\n| $g(t)=e^{-t}+t-1$ | Reverse $\\\\chi^2$ | $\\\\checkmark$ | |\\n| $g(t)=\\\\frac{1}{2}\\\\|t\\\\|$ | Total Variation | | |\\n| $g(t)=\\\\frac{1}{2}(e^t+\\\\frac{1}{2}t^2-t-1)$ | Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ |\\n| $g(t)=\\\\frac{1}{2}\\\\int_{1}^t\\\\log\\\\frac{e^x+1}{2}dx$ | Jensen-Shannon | | |\", \"table_2\": \"The number of runs that find all modes within 250k steps, and the median of the steps before they find all modes.\\n| | Zero-forcing | Zero-avoiding | TB | DB | STB |\\n|--|:--:|:--:|:--:|:--:|:--:|\\n| Reverse KL (baseline) | $\\\\checkmark$ | | $1/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $\\\\underline{5/5}$, $13.4k$ | $4/5$, $\\\\ 50.6k\\\\ $ |\\n| Reverse $\\\\chi^2$ | $\\\\checkmark$ | | $0/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $0/5$, $\\\\ \\\\ -\\\\ \\\\ \\\\ $ | $0/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ $ |\\n| Forward KL | | $\\\\checkmark$ | $\\\\underline{5/5}$, $\\\\ 98.0k\\\\ $ | $\\\\underline{5/5}$, $10.8k$ | $\\\\underline{5/5}$, $\\\\ 20.3k\\\\ $ |\\n| Forward $\\\\chi^2$ | | $\\\\checkmark$ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{80.3k}$ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{8.1k}\\\\ $ | $\\\\underline{5/5}$, $\\\\ \\\\mathbf{10.2k}$ |\\n| Hellinger | | | $\\\\underline{5/5}$, $111.2k$ | $\\\\underline{5/5}$, $11.7k$ | $\\\\underline{5/5}$, $\\\\ 55.9k\\\\ $ |\\n| Total Variation | | | $1/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ | $\\\\underline{5/5}$, $47.1k$ | $2/5$, $\\\\ \\\\ \\\\ -\\\\ \\\\ \\\\ \\\\ $ |\\n| Jensen-Shannon | | | $4/5$, $162.2k$ | $\\\\underline{5/5}$, $12.8k$ | $3/5$, $165.2k$ |\\n| Shifted-Cosh | $\\\\checkmark$ | $\\\\checkmark$ | $4/5$, $\\\\ 92.2k\\\\ $ | $0/5$, $\\\\ \\\\ -\\\\ \\\\ \\\\ 
$ | $\\\\underline{5/5}$, $\\\\ 90.0k\\\\ $ |\\n| Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ | $4/5$, $122.2k$ | $\\\\underline{5/5}$, $13.7k$ | $\\\\underline{5/5}$, $\\\\ 27.5k\\\\ $ |\", \"table_3\": \"The Spearman correlation between $P_T$ and $P_R$ over a test set (the higher the better). The failed runs where modal collapse happened are eliminated.\\n| | Zero-forcing | Zero-avoiding | TB | DB | STB |\\n|--|:--:|:--:|:--:|:--:|:--:|\\n| Reverse KL (baseline) | $\\\\checkmark$ | | $\\\\underline{0.8081}(\\\\pm0.0159)$| $0.7907(\\\\pm0.0175)$ | $\\\\underline{0.8088}(\\\\pm0.0169)$|\\n| Reverse $\\\\chi^2$ | $\\\\checkmark$ | | $\\\\underline{0.8074}(\\\\pm0.0129)$| - | $\\\\underline{0.7899}(\\\\pm0.0166)$|\\n| Forward KL | | $\\\\checkmark$ | $0.7421(\\\\pm0.0216)$ | $0.7464(\\\\pm0.0107)$ | $0.7517(\\\\pm0.0246)$ |\\n| Forward $\\\\chi^2$ | | $\\\\checkmark$ | $0.7507(\\\\pm0.0174)$ | $0.7266(\\\\pm0.0178)$ | $0.7439(\\\\pm0.0126)$ |\\n| Hellinger | | | $0.7454(\\\\pm0.0021)$ | $0.7580(\\\\pm0.0132)$ | $0.7711(\\\\pm0.0190)$ |\\n| Total Variation | | | $\\\\underline{0.7893}(\\\\pm0.0144)$| $0.7266(\\\\pm0.0178)$ | - |\\n| Jensen-Shannon | | | $\\\\underline{0.7852}(\\\\pm0.0256)$| $0.7542(\\\\pm0.0046)$ | $0.7640(\\\\pm0.0213)$ |\\n| Shifted-Cosh | $\\\\checkmark$ | $\\\\checkmark$ | $\\\\mathbf{0.8122}(\\\\pm0.0145)$ | $\\\\mathbf{0.8213}(\\\\pm0.0094)$| $\\\\mathbf{0.8132}(\\\\pm0.0149)$ |\\n| Symmetric KL | $\\\\checkmark$ | $\\\\checkmark$ | $\\\\underline{0.7908}(\\\\pm0.0235)$| $0.7630(\\\\pm0.0097)$ | $\\\\underline{0.7886}(\\\\pm0.0227)$|\"}
4NRjdISWby
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning
[ "Zhekai Du", "Yinjie Min", "Jingjing Li", "Ke Lu", "Changliang Zou", "Liuhua Peng", "Tingjin Chu", "Mingming Gong" ]
Low-rank adaptation (LoRA) has become a prevalent method for adapting pre-trained large language models to downstream tasks. However, the simple low-rank decomposition form may constrain the optimization flexibility. To address this limitation, we introduce Location-aware Cosine Adaptation (LoCA), a novel frequency-domain parameter-efficient fine-tuning method based on inverse Discrete Cosine Transform (iDCT) with selective locations of learnable components. We begin with a comprehensive theoretical comparison between frequency-domain and low-rank decompositions for fine-tuning pre-trained large models. Our analysis reveals that frequency-domain decomposition with carefully selected frequency components can surpass the expressivity of traditional low-rank-based methods. Furthermore, we demonstrate that iDCT offers a more efficient implementation compared to inverse Discrete Fourier Transform (iDFT), allowing for better selection and tuning of frequency components while maintaining equivalent expressivity to the optimal iDFT-based adaptation. By employing finite-difference approximation to estimate gradients for discrete locations of learnable coefficients on the DCT spectrum, LoCA dynamically selects the most informative frequency components during training. Experiments on diverse language and vision fine-tuning tasks demonstrate that LoCA offers enhanced parameter efficiency while maintaining computational feasibility comparable to low-rank-based methods.
[ "Parameter-efficient fine-tuning", "discrete cosine transform", "transfer learning", "adaptation" ]
Accept (Poster)
https://openreview.net/pdf?id=4NRjdISWby
https://openreview.net/forum?id=4NRjdISWby
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zhL2wsuchn", "zg4P2osaMG", "zSSXaeIMsk", "z8e64ZeAYk", "wgqRV4fiQA", "wPhUH8ix7w", "u0fnTueSET", "trYOGTWooL", "tiWjY2Jch3", "rYmURn5cUZ", "r7xoIEuAt4", "qVt3K6qluz", "qM8ZG6bZYl", "q2U3T48iRo", "p3fd2dM0dx", "o0oZka2DMY", "nqJlGxi5ty", "nfuV5LCOZ0", "cbzlkjqA8y", "bqn1nGy9DO", "bopG5ibQHZ", "bIWBNjJusD", "aT5pyLFclH", "Z1S8yHzJ90", "YEWBYxKVi6", "Y18JomnG5b", "SUAgzrJFzF", "QhOZlqRpmy", "P76D2FAvnp", "O4jOQUSt0b", "O2gmnYm6xI", "Ni1VYc6T3h", "JBVr67BNul", "IoYqAZhC2c", "Ij84AY9ykG", "Gb4xndjnHE", "F3rwuocxU8", "ECJKFD28Tw", "DuJVWjOubA", "B35BQP2vjZ", "ANij5cPoN2", "97VfuesU5a", "8ITtSbAzIy", "7OPpdCihuf", "65yQUtxqQj", "5nFXxCr9Qt", "4snN37xd3S", "3TC1aLWPEj", "1UMPbPyzcJ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1731929431454, 1731928806803, 1732502527026, 1732502232730, 1731929634665, 1730192559599, 1732568862422, 1731928367121, 1731929280711, 1732298232239, 1731928576010, 1732298318514, 1732641162001, 1732502624376, 1732298107634, 1733023530800, 1731235285983, 1731928877482, 1732344310160, 1731928456531, 
1732502445814, 1732296387063, 1732297160604, 1730105251834, 1732641756213, 1732086899182, 1731929872172, 1730059844635, 1732640777963, 1733127513029, 1732502397130, 1731928141997, 1732247266000, 1732502280227, 1731928955949, 1732502356757, 1734677759696, 1731929085546, 1731928258492, 1730351541487, 1731928668822, 1737523577113, 1732640200958, 1731929952146, 1732506019949, 1731929831317, 1732294115891, 1730545616063, 1731929700408 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_96ov" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_kgMQ" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_kgMQ" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_vduh" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_vduh" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_3NDS" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_8m9H" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Area_Chair_c9wo" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_8m9H" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_STG6" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_vduh" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ], [ "ICLR.cc/2025/Conference/Submission3456/Area_Chair_c9wo" ], [ "ICLR.cc/2025/Conference/Submission3456/Reviewer_3NDS" ], [ "ICLR.cc/2025/Conference/Submission3456/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer vduh, Part 2\", \"comment\": \"### Additional benchmarks for fine-tuning RoBERTa (Question 2).\\nThanks for the comment. We would like to clarify our rationale for benchmark choices and explain how we have enhanced our evaluation scope in the revised manuscript.\\n\\nFor RoBERTa fine-tuning experiments, we utilized the GLUE benchmark, which has become the de facto standard for evaluating PEFT methods in recent literature. Our evaluation comprehensively covers 8 tasks from GLUE, which is more extensive than recent PEFT works such as VeRA and FourierFT, which evaluated on only 6 tasks.
These 8 tasks encompass diverse aspects of language understanding, providing a comprehensive assessment of model capabilities.\\n\\nTo further strengthen our evaluation, we have added a new section (`Section 5.2`: Natural Language Generation) in the revised manuscript, which evaluates different PEFT methods on the E2E NLG Challenge dataset using GPT-2 Medium and Large models. The E2E NLG Challenge is a widely-adopted benchmark in the PEFT community. This addition complements our GLUE evaluation by examining performance on generation tasks, offering a more complete picture of our method's effectiveness across different types of language tasks.\\n\\nWe believe this combination of comprehensive GLUE evaluation and additional NLG experiments provides robust validation of our method's effectiveness, while maintaining comparability with existing PEFT literature.\\n\\n### Comparison with other LoRA variants (Question 3).\\nThanks for the comment. We have included comparisons with DoRA [R2] (`Table 1`) and VeRA [R3] (`Table 1` and `Table 2`) as baseline methods. Specifically, DoRA was included in our original submission, while VeRA was added in response to reviewer STG6's suggestion. We acknowledge the importance of comprehensive comparisons and believe our current baseline methods provide a strong foundation for evaluating our proposed approach.\\n\\n[R2] Liu et al., Dora: Weight-decomposed low-rank adaptation, ICML, 2024.\\n\\n[R3] Kopiczko et al., Vera: Vector-based random matrix adaptation, ICLR, 2024.\\n\\n### Performance scaling on Figure 3 (Question 4).\\nThanks for the comment. We have conducted additional experiments with $r=16$ for both LoRA (91.23%) and LoCA (91.34%) on QQP. While there is indeed a slight improvement, it is worth noting that the y-axis scale in Figure 3 spans a relatively small range, indicating that the performance gains are actually quite marginal. 
Moreover, due to QQP's large dataset size, experiments with other rank values are computationally intensive. The observed pattern suggests that the performance is approaching saturation, making $r=8$ a reasonable trade-off between computational efficiency and model performance for practical applications.\"}", "{\"title\": \"Response to Reviewer 8m9H, Part 1\", \"comment\": \"Thanks for the careful reading and thoughtful comments. In the following, we address each of the concerns raised in detail.\\n\\n### Computational complexity of alternating optimization and central difference approximation (Weakness 1).\\nThanks for the concern about the computational complexity of LoCA. We would like to clarify several points that demonstrate LoCA's computational efficiency.\\n\\nAlthough LoCA employs alternating optimization, it sequentially optimizes coefficients and locations rather than simultaneously. This means the computational overhead per iteration remains constant compared to coefficient-only optimization. During each iteration, we only update one set of parameters while keeping the other fixed.\\n\\nRegarding the central difference approximation, we have provided the expression in `Section 4.3` and a formal complexity analysis in `Appendix I`. The gradient computation for locations can be efficiently implemented by reusing the DCT of the gradient matrix across all locations and coefficients (the code can be found in the `Supplementary Material`). This leads to computational complexity comparable to coefficient-only optimization. In practice, the training time per iteration remains stable throughout the training process.\\n\\nTo address similar concerns from Reviewers 3NDS and vduh, we have conducted comprehensive empirical studies comparing LoRA, FourierFT, and LoCA across different datasets, model scales, and parameter budgets.
Our results (`Table 10, Appendix J` in the revised paper) demonstrate that the practical running time of LoCA is comparable to FourierFT and only marginally slower than LoRA (which benefits from highly optimized GPU implementations). Besides, LoCA consistently shows lower memory consumption than FourierFT, though both require slightly more memory than LoRA.\\n\\nFurthermore, we would like to note that our current fast DCT implementation uses FFT, which introduces some overhead. A specialized fast DCT implementation in PyTorch could improve efficiency. We leave these efficiency improvements to future implementations. Besides, DCT is theoretically more efficient than FFT for real-valued data, as FFT's complex number operations introduce unnecessary computations.\"}", "{\"title\": \"Response to Feedback of Part 5\", \"comment\": \"Thanks for the feedback. We would like to further address your concerns as follows.\\n\\n**Order of optimization and its impact:** From an optimization landscape perspective, coefficient optimization with fixed locations represents a convex subproblem, while location optimization alone may lead to numerous local optima due to the discrete nature of locations. Starting with the more well-behaved subproblem helps establish stable convergence trajectories.\\n\\nBesides, the alternating strategy operates at a very fine granularity, with switches between coefficient and location optimization occurring every 10-20 steps ($\\\\mathcal{B}_a$ = 10 and $\\\\mathcal{B}_l$ = 20). The optimization process goes through many cycles of alternation, making the initial ordering less consequential to the final outcome. This is analogous to cyclic coordinate descent methods, where the order of variable updates becomes less important when multiple passes are made through the optimization cycle.
The key factor is maintaining a consistent alternation frequency that allows both variable types to adapt and converge together.\\n\\n**Interactions between coefficients and locations:** Our alternating strategy actually captures the essential interactions between coefficients and locations through several mechanisms:\\n\\nFirst, the alternating nature of updates allows each variable to adapt to changes in the other, creating an implicit feedback loop that captures their interdependencies. Second, this approach is theoretically grounded in block coordinate descent methods, which have been proven to converge to stationary points under mild conditions. As shown by [], alternating optimization can achieve the same convergence rate as joint optimization under mild conditions, while being more computationally stable. The primary difference lies in the constant factors rather than the asymptotic behavior. Our extensive experimental results across various tasks (Tables 1-4, Figure 2) demonstrate that the current alternating strategy achieves strong performance while maintaining reliable convergence.\\n\\n\\n**Gradient approximation methods:**\\nCentral difference offers better stability by considering both directions of perturbation, making it more robust to the asymmetric nature of the optimization landscape around discrete location points.\\n\\nWe have added the performance of LoCA (central difference) as a baseline for clear comparison in `Table 5` of the revised manuscript. (As requested by Reviewers kgMQ and 3NDS, we now report the results for $\\\\mathcal{B}=3000$ and $\\\\mathcal{B}=10,000$ in `Table 4`. However, the results for $\\\\mathcal{B}=5000$ can still be found in the original version). 
Our experiments in `Table 5` also verified that although forward and backward differences sometimes achieve similar results to central differences, central differences produce more stable results across tasks.\"}", "{\"title\": \"New revised version of the paper\", \"comment\": \"We thank the reviewers again for their feedback on our response. Based on the comments, we have made additional revisions to our paper. Specifically,\\n\\n* To address the concern about weak dependence and deviations from i.i.d. behavior, we have added a comprehensive analysis in `Appendix P`, where we quantitatively examine the impact of parameter correlations through both numerical simulation experiments and statistical tests, see pages 39-40.\\n\\n* The results of LoCA's current implementation are added to Table 5 to prevent the misconception that forward/backward difference approximations outperform central difference approximation.\\n\\nWe hope that the current version addresses the reviewer's concerns.\"}", "{\"title\": \"Response to Reviewer STG6, Part 1\", \"comment\": \"Thanks for the time and effort in providing detailed feedback. We have carefully considered all comments and provide comprehensive responses below.\\n\\n### Inadequate empirical support for theoretical claims (Weakness 1).\\nThanks for the detailed comments. However, we would like to clarify several misunderstandings:\\n\\n* Clarification of Central Claim:\\nThe interpretation of our central claim is not accurate. Our central claim is that optimal frequency-domain-based reconstruction can be achieved through individual selection of frequency coefficients and their optimal locations, rather than through random selection of frequency components.
The comparison between random frequency selection and LoRA is a secondary finding in our theoretical framework.\\n\\n* Experimental Validation for the Theoretical Finding:\\nPlease note that the results we obtained in Theorem 1 show that the Fourier method with randomly selected frequency component locations yields a larger **expected reconstruction loss** compared to the optimal approximation based on low-rank decomposition. FourierFT is a very valuable and meaningful work. Our comparison is conducted solely from the perspective of **reconstruction**, which has direct experimental validation. Specifically, Figures 6 and 7 in `Appendix G` provide rigorous experimental validation of our theoretical claims across different ranks and dimensionalities of weight matrices, where 'R' represents the reconstruction capability of low-rank decomposition and 'U' represents that of random frequency selection in Fourier decomposition. These experiments directly validate our theoretical findings.\\n\\n* Apparent Contradiction with FourierFT's Results:\\nRegarding the apparent contradiction with FourierFT's empirical results in its Appendix C.2, it is important to note that our analysis focuses on matrix reconstruction capability, which, while important, may not directly translate to downstream task performance in all scenarios. As we explicitly discuss in `Section 5.5` (**Performance under Different Parameter Budgets**), our theoretical analysis concerns expected performance rather than performance in every specific case. Task-specific structures may indeed allow FourierFT to outperform LoRA in certain instances without contradicting our theoretical framework.\\n\\n* Selective FourierFT and LoCA Evaluation:\\nPlease refer to the response to Question 4.\\n\\n### Omitted comparison with VeRA (Weakness 2).\\nThanks for the comment. In the revised manuscript, we have included comparisons with VeRA.
Specifically, we have added VeRA as a baseline on the GLUE benchmark (`Table 1`) and included a new section (`Section 5.2`) on Natural Language Generation, where we evaluate our method against VeRA on the E2E NLG Challenge dataset (`Table 2`). Please refer to the revised manuscript for details.\\n\\n### Parameter overhead due to optimizable locations (Weakness 3 & Question 2).\\nThanks for the concern about parameter overhead. However, we would like to clarify several important points regarding LoCA's actual computational and memory efficiency. First, the optimization of location parameters only occurs during the initial training phase (typically the first 10% of iterations), similar to the dynamic parameter allocation strategy successfully employed in AdaLoRA. After this initial phase, we do not regard locations as trainable parameters and no gradient computation is required.\\n\\nRegarding the parameter count, while LoCA does require storing additional location parameters, the actual storage overhead is minimal. For instance, storing 150,000 integer location parameters only adds approximately 0.57MB to disk storage - a negligible increase compared to the base model's size. More importantly, the parameter count does not directly mean runtime memory usage or computational efficiency. To see this, we have conducted comprehensive empirical evaluations comparing LoCA, LoRA, and FourierFT across different datasets, model scales, and parameter budgets. The results have been added to the revised paper (`Table 10`) and corresponding analysis section (`Appendix J`). As demonstrated in `Table 10`, LoCA actually shows comparable or better memory efficiency compared to FourierFT across different model scales and datasets. This is partly because FourierFT requires complex-domain computations to obtain real-valued network parameters, leading to additional memory overhead. 
Therefore, while LoCA may have a higher parameter count, its practical scalability and efficiency in terms of actual memory usage and computation time remain competitive.\"}", "{\"summary\": \"The paper titled \\\"LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning\\\" introduces a novel parameter-efficient fine-tuning (PEFT) method called Location-Aware Cosine Adaptation (LoCA). This method is designed to adapt pre-trained large language models (LLMs) to downstream tasks with improved optimization flexibility and parameter efficiency. LoCA is based on the inverse Discrete Cosine Transform (iDCT) and selectively tunes learnable components at specific locations\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. LoCA introduces a novel approach for parameter-efficient fine-tuning in the frequency domain through the inverse Discrete Cosine Transform (iDCT) and selective learning of frequency components. This method demonstrates the potential to surpass traditional low-rank decomposition techniques both theoretically and empirically, which is of significant value for resource-constrained environments and the deployment of large-scale models. Furthermore, your work provides a comprehensive theoretical analysis comparing frequency domain methods with low-rank decomposition approaches, which is meaningful.\\n2. The methodology section of the paper is rigorous, and the experiments cover multiple domains, including natural language processing and computer vision. The paper offers comparisons with existing techniques, such as LoRA and FourierFT, which help readers understand the performance and efficiency of LoCA. Additionally, the in-depth theoretical analysis provides a solid foundation for frequency domain parameter-efficient fine-tuning methods.\\n3. 
The writing of the paper is clear and logically structured, with a coherent flow from the introduction to the methodology, experimental results, and conclusions. In particular, the detailed explanation of how LoCA operates, including the application of the inverse Discrete Cosine Transform and the alternating optimization strategy, enhances the reader's understanding of the relevant work.\", \"weaknesses\": \"1. The paper contains a limited amount of content related to RELATED WORK, with insufficient coverage of the existing literature in the field.\\n2. While the experimental results are convincing, the paper could further expand the experimental section to include the verification of LoCA's performance on more datasets. Additionally, a more in-depth analysis of LoCA's performance on different model scales and tasks of varying complexity would help to further demonstrate its applicability and robustness.\", \"questions\": \"1. The paper primarily compares LoCA with LoRA-related fine-tuning techniques. Has consideration been given to performance comparisons with other fine-tuning methods such as prompt learning and adapter tuning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
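The reconstruction comparison that runs through this thread (optimal frequency selection vs. low-rank decomposition vs. random frequency selection) can be illustrated numerically. The sketch below is our own illustration, not the authors' code; the matrix size, rank, seed, and the use of SciPy's `dctn`/`idctn` are arbitrary choices made for the example.

```python
# Illustrative sketch (not the authors' code): compare the reconstruction
# error of a rank-r SVD approximation against keeping an equal number of
# DCT coefficients, either at random locations or at the largest magnitudes.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n, r = 64, 4
W = rng.standard_normal((n, n))   # stand-in for a weight update
budget = r * 2 * n                # scalar count of a rank-r factorization (two n x r factors)

# Rank-r approximation via truncated SVD
U, s, Vt = np.linalg.svd(W)
W_lr = (U[:, :r] * s[:r]) @ Vt[:r]

# DCT-domain approximations with the same number of scalars
C = dctn(W, norm="ortho")

def keep(idx):
    # Zero out every DCT coefficient except those at the given flat indices
    mask = np.zeros(C.size, dtype=bool)
    mask[idx] = True
    return idctn(np.where(mask.reshape(C.shape), C, 0.0), norm="ortho")

W_rand = keep(rng.choice(C.size, budget, replace=False))        # random locations
W_best = keep(np.argsort(np.abs(C).ravel())[-budget:])          # best locations

err = lambda A: np.linalg.norm(W - A)
print(f"rank-{r} SVD: {err(W_lr):.2f}  random DCT: {err(W_rand):.2f}  "
      f"top-|c| DCT: {err(W_best):.2f}")
```

With the same parameter budget, keeping the largest-magnitude DCT coefficients typically yields a lower reconstruction error than the rank-$r$ SVD, while randomly located coefficients do worse, mirroring the ordering argued in this discussion.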
I am now satisfied with your explanation, and I appreciate the validation and updates you have provided.\"}", "{\"title\": \"Response to Reviewer kgMQ, Part 2\", \"comment\": \"### Limited scope of theoretical analysis (Weakness 3).\\nThanks. Although a unified theoretical analysis encompassing all low-rank methods may not be feasible, we can still conduct case-by-case analysis, as all low-rank-based methods have an inherent upper bound on reconstruction capability. \\n\\nFor a given $\\\\Delta W\\\\in\\\\mathbb{R}^{n\\\\times n}$, VeRA [R2] decomposes it to $\\\\Lambda_bB\\\\Lambda_dA$ where $B,A$ are drawn i.i.d. from a certain distribution and are frozen and shared over all training steps and layers, and $\\\\Lambda_b,\\\\Lambda_d$ are learnable diagonal matrices. From a reconstruction perspective, the $i$-th element of $\\\\Lambda_b$ is the OLS coefficient obtained by setting the response as the $i$-th row of $\\\\Delta W$ and the covariate as the $i$-th row of $B\\\\Lambda_dA$. This idea enables us to find the $\\\\Lambda_d$ that maximizes the correlation between the $i$-th row of $\\\\Delta W$ and the $i$-th row of $B\\\\Lambda_dA$. However, since $A$ and $B$ are chosen randomly, independently of $\\\\Delta W$, the reconstruction error is approximately the error we would incur when fitting white noise.\\n\\nWe can conduct a detailed theoretical analysis of DoRA [R3]; here we only give the outline. For a given $\\\\Delta W$, DoRA first decomposes it as $\\\\Delta W=A\\\\Lambda$ where $\\\\Lambda$ is diagonal and each column of $A$ has magnitude $1$. The $r$-rank approximation is $A_r\\\\Lambda$, where $A_r=U_r\\\\Lambda_rV_r^T$, and $U_r,V_r\\\\in\\\\mathbb{R}^{n\\\\times r}$ and $\\\\Lambda_r$ contains the $r$ largest singular values of $A$. If each element in $\\\\Delta W$ follows an i.i.d. standard normal distribution, we can derive the independence of $A$ and $\\\\Lambda$.
Using total expectation, we have the following reconstruction loss:\\n$$\\n\\\\mathbb{E}(\\\\|A\\\\Lambda-A_r\\\\Lambda\\\\|^2)=\\\\mathbb{E}\\\\{\\\\mathbb{E}(\\\\|A\\\\Lambda-A_r\\\\Lambda\\\\|^2|A)\\\\}=\\\\sqrt{2}\\\\dfrac{\\\\Gamma((n+1)/2)}{\\\\Gamma(n/2)}\\\\mathbb{E}(\\\\|A-A_r\\\\|^2),\\n$$\\nsince each non-zero element in $\\\\Lambda$ follows an i.i.d. $\\\\chi(n)$ distribution. Subsequent calculations only require computing the reconstruction loss based on the distribution of $A$. At this point, the reconstruction loss is consistent with the LoRA method, except that the distributions are different. This requires complex calculations, but since each column of $A$ is the direction of a random normal vector, the difference should not be significant. The loss corresponding to DoRA should therefore be approximately the same as that of LoRA.\\n\\nWe have supplemented this discussion in `Appendix O` in the revised manuscript.\\n\\n[R2] Kopiczko et al., Vera: Vector-based random matrix adaptation, ICLR, 2024.\\n\\n[R3] Liu et al., Dora: Weight-decomposed low-rank adaptation, ICML, 2024.\\n\\n### Theory-practice alignment (Weakness 4).\\nThanks for the comment. We would like to discuss this point from three key aspects. First, our theoretical analysis primarily focuses on matrix reconstruction capability, which may not perfectly align with downstream task performance, since task performance is influenced by multiple factors, such as random seeds, hyperparameters, etc. This phenomenon is evident in some cases where PEFT methods outperform full fine-tuning despite higher reconstruction loss. However, matrix reconstruction serves as a reasonable proxy for model performance without task-specific priors. Second, all our theoretical results are derived in expectation, and thus describe average-case behavior. As noted in Section 5.5 (Performance under Different Parameter Budgets), specific task structures may indeed favor low-rank methods or FourierFT in certain instances.
These exceptions do not invalidate the general theoretical framework, since LoCA does outperform FourierFT in average performance on GLUE and ViT experiments. Third, the theorem requires identifying the optimal learnable locations for reconstruction. However, the practical implementation relies on gradient approximation, which may not achieve global optimality for all locations. We have acknowledged this limitation and discussed its implications in `Appendix M` (Remark).\\n\\n### Comparison between FourierFT and LoCA with the same number of trainable parameters (Weakness 5 and Question 2).\\nThanks for the comment. Following similar feedback from Reviewer 3NDS, we have conducted additional experiments comparing LoCA and FourierFT under identical parameter budgets (using both 3000 and 10,000 frequency components) for ViT models. The updated results are now presented in `Table 4` of the revised manuscript.\"}", "{\"title\": \"Response to Reviewer vduh, Part 1\", \"comment\": \"Thanks for the thorough review and insightful comments. Below we address each concern in detail.\\n\\n### Instruction tuning on MathInstruct (Question 1 & Weakness 1).\\nThanks for the valuable suggestion regarding the use of the MathInstruct dataset. Following this recommendation, we have initiated experiments using the MAmmoTH codebase [R1] to evaluate our method on this more mathematically-focused benchmark. As the dataset is relatively large and we are simultaneously conducting other experiments requested by Reviewers kgMQ, 3NDS, and 96ov, we may not be able to provide complete results for both LoCA and all baselines before the deadline. However, we are committed to sharing preliminary findings as soon as they become available.\\n\\nWhile we acknowledge the reviewer's concern about MT-Bench's stability, we would like to respectfully note that MT-Bench and Vicuna have been widely accepted as standard benchmarks for evaluating instruction tuning in the PEFT community.
Recent influential works in this field, including DoRA, VeRA, and FourierFT, have all employed these benchmarks for the instruction tuning experiment. To address the stability concern of GPT-4 evaluation, we have taken careful measures in our experimental design: all results reported in Table 3 were obtained through fresh runs using the same version of GPT-4 within the same time period, ensuring a fair comparison across different methods. This controlled setting helps mitigate the potential instability issues in evaluation. Moreover, we have provided detailed example outputs in `Appendix K` to offer qualitative insights into the performance differences between methods.\\n\\nNevertheless, we will continue our ongoing experiments with MathInstruct and update our results accordingly.\\n\\n[R1] Yue, et al., MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning, ICLR, 2024\\n\\n### Analysis on computation and memory costs (Weakness 2).\\nThanks for the comment. We have actually conducted comprehensive analyses of these aspects in our paper, particularly in `Section 4.3`, `Appendices I` and `Appendix J`.\\n\\n**Computational Efficiency.** Our alternating optimization strategy ensures that coefficients and locations are not optimized simultaneously, preventing additional computational overhead compared to coefficient-only optimization. As demonstrated in Section 4.3, our central difference approximation for gradient estimation can be efficiently implemented. The gradient computation in Eq. (5) shows that $Z$ can be reused for all location updates (can be found in the code provided in the `Supplementary Material`), making the additional computation negligible. Our complexity analysis in Appendix I formally proves that the computational complexity remains in the same order as coefficient-only optimization. 
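The location-gradient trick described above (one DCT of the weight gradient, reused for every learnable location) can be sketched as follows. This is our reading of the mechanism, not the authors' Eq. (5); the function `location_grads` and its shapes are hypothetical, and only the row index is differenced here.

```python
# Hedged sketch (our reading, not the authors' exact Eq. (5)): to first order,
# a coefficient a at frequency location (u, v) contributes a * Z[u, v] to the
# loss, where Z = DCT(dL/dW). Z is computed once and indexed for all k
# locations, so location updates cost little beyond coefficient-only training.
import numpy as np
from scipy.fft import dctn

def location_grads(grad_W, coeffs, locs):
    """Central-difference gradient of the loss w.r.t. integer row-locations.

    grad_W : (n, n) gradient of the loss w.r.t. the weight update
    coeffs : (k,)   learnable frequency coefficients
    locs   : (k, 2) integer frequency locations (row, col)
    """
    n = grad_W.shape[0]
    Z = dctn(grad_W, norm="ortho")                # single DCT, shared by all k locations
    rows, cols = locs[:, 0], locs[:, 1]
    up = Z[np.clip(rows + 1, 0, n - 1), cols]     # f(h + 1), clamped at the boundary
    down = Z[np.clip(rows - 1, 0, n - 1), cols]   # f(h - 1), clamped at the boundary
    return coeffs * (up - down) / 2.0             # central difference over step 2

rng = np.random.default_rng(1)
g = location_grads(rng.standard_normal((8, 8)),
                   rng.standard_normal(4),
                   rng.integers(1, 7, size=(4, 2)))
print(g.shape)  # one gradient per learnable frequency component
```

An analogous difference over the column index would give the other half of each location's gradient; after the initial phase the locations are frozen and only the coefficients continue to be trained, as the authors describe.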
\\n\\n**Memory Usage and Runtime Performance.** We have conducted extensive empirical comparisons among LoRA, FourierFT, and LoCA across various datasets, model scales, and parameter budgets (detailed in Appendix J, Table 10). Our empirical study reveals that while LoCA theoretically has different asymptotic complexity, its practical running time is comparable to FourierFT and only marginally slower than LoRA (which benefits from highly optimized GPU implementations of matrix operations). Regarding memory consumption, LoCA demonstrates consistently lower memory usage compared to FourierFT, though both methods require slightly more memory than LoRA.\\n\\n**Future Optimization Opportunities.** Our current fast DCT implementation uses FFT, which introduces some computational overhead. We identify potential improvements through specialized fast DCT algorithms.\\nSince our method operates on real-valued data, DCT is theoretically more efficient than FFT as it avoids unnecessary complex number operations.\"}", "{\"title\": \"Feedback of Part 1\", \"comment\": \"Thank you for your detailed responses and the clarifications provided.\\n\\n1. **Reconstruction Loss vs. Downstream Task Performance**: \\n Thanks for your explanation regarding the expected reconstruction loss and its experimental validation through Figures 6 and 7. However, I am still wondering why good reconstruction performance does not directly or indirectly translate into better downstream task performance. Since the primary objective of PEFT approaches is to optimize adaptation for downstream tasks, this discrepancy between reconstruction quality and task performance appears counterintuitive. Could you provide further insights or analysis to explain this phenomenon? Specifically, the results presented in this paper for LoCA, as well as those in the FourierFT paper, suggest unexpected trends where strong reconstruction capabilities do not consistently align with improved downstream task outcomes. 
Understanding the factors that decouple reconstruction performance from downstream task effectiveness would be crucial for assessing the practical value and broader applicability of LoCA and similar methods.\\n\\n2. **VeRA and NLG Experiments**: \\n Thank you for including VeRA as a baseline and adding evaluations on the NLG dataset. These additions provide a more comprehensive perspective on LoCA's performance, and I am satisfied with this aspect of your revisions.\\n\\n3. **Parameter Efficiency and Initial Gradient Computation**: \\n Regarding the additional parameters introduced by LoCA and the initial gradient computation during the first 10% of iterations, I now agree with your explanation that this is a reasonable design choice. The results in Table 10 and the clarification on memory and computational overhead sufficiently address my concerns about the scalability and efficiency of LoCA.\"}", "{\"title\": \"Response to Reviewer 3NDS, Part 1\", \"comment\": \"We appreciate the comprehensive evaluation and insightful suggestions. Below we provide detailed responses to address each of the concerns raised.\\n\\n### Results on StanfordCars and FGVC datasets (Weakness 1 & Question 1). \\nThanks for the comment. We would like to emphasize that our implementation is based on the official FourierFT codebase, and we have taken great care to ensure experimental rigor. We use the same ViT models (pretrained on ImageNet-21k) as specified in our implementation details, and all experiments were conducted under identical conditions for fair comparison across methods. All hyperparameters for both our method and baseline methods are thoroughly reported in `Appendix D` to ensure reproducibility and fair comparison.\\n\\nRegarding the specific performance on Stanford Cars and FGVC datasets, we have consulted with other researchers in the PEFT community who have confirmed that their baseline results closely align with ours.
To ensure complete transparency and reproducibility, we have made our implementation publicly available in the `Supplementary Materials`. This allows anyone to verify our results and experimental procedures. \\n\\n### Comparison with FourierFT under the same parameter budget (Question 2).\\nThanks for this valuable suggestion. We have conducted additional experiments to compare FourierFT and LoCA under equal parameter budgets (239K and 480K) for ViT models. The results are now updated in `Table 4` in the revised manuscript.\\n\\nFor ViT-base with 239K parameters, LoCA achieves better performance than FourierFT across multiple datasets (e.g., OxfordPets: 94.10% vs 93.44%, DTD: 80.15% vs 79.43%, FGVC: 54.86% vs 52.26%). Similarly, for ViT-large with 480K parameters, LoCA consistently outperforms FourierFT (e.g., StanfordCars: 83.47% vs 82.27%, FGVC: 63.02% vs 56.96%).\\n\\n### Training time and memory cost (Question 3).\\nThanks for the comment. We have conducted comprehensive empirical evaluations comparing LoCA, LoRA, and FourierFT across different datasets, model scales, and parameter budgets. The results have been added to the revised paper (`Table 10`) and corresponding analysis section (`Appendix J`).\\n\\nOur empirical study reveals that while LoCA theoretically has different asymptotic complexity, its practical running time is comparable to FourierFT and only marginally slower than LoRA (which benefits from highly optimized GPU implementations of matrix operations). Regarding memory consumption, LoCA demonstrates consistently lower memory usage compared to FourierFT, though both methods require slightly more memory than LoRA.\\n\\nWe also discuss potential optimization opportunities: our current fast DCT implementation relies on FFT, which introduces some computational overhead. A specialized fast DCT algorithm could potentially improve LoCA's efficiency further.
Additionally, we note that DCT is theoretically more efficient than FFT for real-valued data (which is our case), as FFT's complex number operations introduce unnecessary computations, leading to slower training speed and more memory cost. These optimizations represent promising directions for future work.\"}", "{\"title\": \"Feedback of Part 2\", \"comment\": \"Thank you for your detailed efforts in addressing my concerns and providing both theoretical justifications and empirical validations. While I am partially satisfied with your explanation, I still have a few critical reservations:\\n\\n1. **Identical Distribution Assumption**:\\n The claim that weight updates are identically distributed due to inherent symmetry in parameter matrices appears insufficient for selective updating methods like LoCA. In such methods, only a subset of weights is updated, and these updates are focused on high-amplitude frequency components that may not be symmetrically distributed or functionally equivalent. This breaks the assumption of identical distribution and warrants further justification.\\n\\n2. **Weak Dependence and CLT Applicability**:\\n While you acknowledge that strict independence does not hold due to gradient-based optimization, you suggest that weak dependence (e.g., $l$-mixing) is sufficient for the Central Limit Theorem (CLT) to apply. The references to statistical theorems ([R1] and [R2]) are appreciated, but their applicability to the specific setting of LoCA remains unclear. Without explicit analysis demonstrating that the weight updates in your framework satisfy the conditions (e.g., mixing rates) required by these theorems, the theoretical foundation for assuming asymptotic normality remains unsubstantiated.\\n\\n3. **Deviations from i.i.d. Behavior**:\\n While you state that extending the proof to account for weak dependence is a technical exercise, it would be highly valuable to quantify how deviations from i.i.d. behavior impact expressivity. 
The updated version lacks direct analyses on this point. Suggestions such as:\\n - Performing sensitivity analyses in high-amplitude regions.\\n - Providing empirical evidence for layer-specific correlations.\\n - Assessing robustness to localized deviations. \\n\\n These additions would address edge cases and provide a more comprehensive validation of your theoretical claims.\\n\\nI appreciate your efforts and look forward to your response on these points.\"}", "{\"comment\": \"Thank you for your response and for providing empirical evidence from diverse tasks, including NLU, instruction tuning, and vision benchmarks, to support the claim that LoCA\\u2019s improved reconstruction capabilities enhance downstream task performance. The consistent improvements across tasks and parameter budgets offer compelling practical evidence of a positive relationship between reconstruction quality and task adaptability. However, a more direct analysis of the correlation between reconstruction error and downstream task metrics\\u2014such as statistical analysis or visualization across datasets\\u2014would further strengthen this claim. Given the time constraints, I understand if this is left as a direction for future work.\"}", "{\"title\": \"Preliminary Results on MathInstruct\", \"comment\": \"Thanks again for your suggestion. Due to computational resource and time constraints, we have currently conducted preliminary fine-tuning only on the LLaMA-7b base model, comparing FourierFT and LoCA. We conducted FT for 2 epochs, with the batch size set to 16 (gradient accumulation steps = 8). We apply PEFT modules on `q_proj`, `k_proj`, `v_proj`, `up_proj`, `down_proj`, with 50K frequency components for reparameterizing each matrix. Other hyperparameters are maintained the same as shown in Table 8 (e.g., learning rates and scaling values). 
Below are the current results.\n\n**In-domain results:**\n\n| Method | GSM8K | MATH | AQuA | NumGLUE | Avg |\n|-----------|------|------|------|----------|------|\n| FourierFT | 51.2 | 29.8 | 43.0 | 58.2 | 45.6 |\n| LoCA | 52.8 | 28.4 | 45.2 | 59.0 | 46.4 |\n\n**Out-of-domain results:**\n\n| Method | SVAMP | Mathematics | SimulEq | SAT-Math | MMLU-Math | Avg |\n|-----------|-------|-------------|---------|-----------|------------|------|\n| FourierFT | 65.2 | 44.8 | 38.4 | 44.2 | 39.4 | 46.4 |\n| LoCA | 63.8 | 47.2 | 41.5 | 42.8 | 41.7 | 47.0 |\n\nWhile these initial results show comparable performance, we acknowledge that a more comprehensive evaluation with different hyperparameter settings and model scales would provide stronger validation. We will continue to explore these points.\"}", "{\"title\": \"Feedback of Part 5\", \"comment\": \"Thank you for your detailed response addressing the optimization strategy and its implementation details. However, I still have the following concerns:\n1. **Order of Optimization and Its Impact**: \n Your explanation of the coefficient-first approach leveraging initial random location assignments to establish a baseline approximation is logical. However, I am still curious about the potential implications of reversing this order. Optimizing locations first might provide an initial structure to the frequency selection, potentially enabling more informed coefficient updates in subsequent steps. Have you explored this alternative ordering experimentally, and if so, could you share insights on how it impacts convergence stability, training dynamics, and final performance? \n\n2. **Capturing Interactions Between Coefficients and Locations**: \n I understand that simultaneous optimization could present convergence challenges, as you noted.
However, given that the interactions between coefficients and locations are central to the performance of LoCA, it would be helpful to understand whether these interactions are adequately captured by the alternating optimization strategy. Have you conducted any empirical or theoretical analyses to quantify the trade-off between simplifying convergence and potentially limiting optimality by separating these updates?\\n\\n3. **Alternative Gradient Approximation Methods**: \\n If the forward or backward difference approximations yielded divergent results in certain cases, it would be interesting to know how these differences manifested in practical performance or stability.\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"I thank the authors for the detailed responses, most of my concerns are resolved, and I lean positively towards accepting the work.\"}", "{\"summary\": \"The paper proposes a novel low-rank adaptation approach for fine-tuning transformer-based pretrained models by deriving weights from parameters learned in the frequency domain using the iDCT operation. Compared to the similar existing method FourierFT, the approach, in theory, promises better reconstruction of the oracle update matrix. Empirically, the results do improve over the baseline FourierFT approach in most cases, indicating effectiveness of the approach. The paper also provides a method to learn locations in the frequency domain where coefficients are required, which is novel and interesting in the context of PEFT.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper takes an existing idea - learning PEFT parameters in the frequency domain and reconstructing the weight matrix from those learned parameters - and performs an in-depth analysis of the approach. 
Based on the obtained insights, it presents a new method that is theoretically better than existing approaches in approximating the ideal fine-tuning matrix and shows quantitative improvements over the base method, FourierFT, in most cases.\", \"The paper includes intuitive theoretical statements, backed by mathematical proofs, which is good to see in a PEFT paper since existing methods often lack theoretical insights and are heuristic-based.\", \"The paper is also well written and intuitive to follow, with rigorous experiments.\", \"I especially liked the approach presented for learning discrete locations to be optimized, which I believe is a novel contribution.\"], \"weaknesses\": [\"The paper starts with some terminology that is not elaborated: \u201coptimization space\u201d and \u201cflexible optimization\u201d. These terms are not defined precisely anywhere, nor is their link to the theory or empirical results clear. It would be better to ground the explanation in well-defined terms that are used in the analysis.\", \"According to the reference (Yohai & Maronna, 1979), the initial assumption, the equation in Sec 2, L99, is true only under certain conditions.\", \"This variant of the equation holds true only when $\\psi$ is monotone and X has full rank. However, this issue is not addressed anywhere in the paper. For example, the matrix X according to the paper is the input matrix. In practical implementation, the dimension of X is m*n where m<n, i.e. for X to be full rank, we need rank(X) = m. This is not stated or supported in the paper.\", \"The theory presented is highly domain-specific: it does not translate to more general PEFT methods such as VeRA or DoRA, and requires significant theoretical adaptations to allow for comparisons with arbitrary low-rank methods.\", \"The theory also does not *always* agree with practice - there are certain cases where LoRA and FourierFT perform better.
This indicates some confounding factors, yet no discussion on this has been included. I do not see this as a reason to reject however, as this is commonly seen in this area, but would appreciate a discussion on the same.\", \"In the case of ViT, I would have liked to see comparisons of the proposed approach with FourierFT having the same number of trainable parameters, as done for NLU and IFT tasks.\"], \"questions\": [\"Please see the weaknesses above. A few additional questions are below:\", \"Could a discussion on certain edge cases where the theory does not hold be provided? More precisely, would it be possible to find situations where the assumptions made in theory do not hold well, resulting in a breakdown of expected results?\", \"Additionally, could results for ViT be provided in situations where the number of parameters is the same as in FourierFT, for a more intuitive comparison?\", \"I would also like to see results in the Natural Language Generation task - particularly for the GPT-2 training performed by FourierFT, as it would indicate the effectiveness of the method when ported to a Conv1D based implementation of the MLP.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8m9H, Part 2\", \"comment\": \"### Noise in frequency component selection (Weakness 2).\nThis is a very good point that merits further discussion and more detailed research.\n\n**Influence of selection noise.** Intuitively, in scenarios with a unique optimal solution, any deviation from the optimal frequency component locations (i.e., selection noise) can result in amplified reconstruction loss, which manifests as magnitude differences in the cosine matrix. However, in neural networks, the optimal solution typically exists within a subset of the hypothesis space (as demonstrated in [R1], Theorem 5.14, page 48).
Our theoretical analysis remains valid under this condition, and this mathematical property significantly mitigates the impact of selection noise. When multiple viable solutions exist, small perturbations in frequency component selection may not substantially degrade performance.\n\n**Why does the noise exist?** Since the location learning process is a combinatorial optimization problem, the greedy algorithm would need to compute the loss after moving the current location to each of its 8 neighboring locations, and then compare these losses to decide how to move the current location (as we discussed in `Appendix M`). However, this approach is computationally expensive. Therefore, we relax the location parameters to be continuous and use integer rounding only when computing the difference between adjacent locations as a gradient, so that the locations can be updated by backpropagation. This, however, introduces discontinuity in the parameters, also leading to noise in the selection process.\n\n**Examples of such noise and measures to stabilize the optimization process.** Consider the problem of optimizing location parameters on a one-dimensional discrete segment where possible locations are $\\{0, 1, ..., n\\}$, with an initial random location $k$. A conventional greedy algorithm would evaluate the expected loss at positions $k-1$, $k$, and $k+1$, selecting the location with minimal loss until convergence. In contrast, our gradient-based approach estimates the differential of expected losses between adjacent locations.
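The 1-D picture can be sketched in a few lines (a toy example with an assumed quadratic loss and hypothetical step sizes, not the actual implementation):

```python
import math

def loss(pos, target=7):
    # assumed toy objective: minimized at the integer location `target`
    return (pos - target) ** 2

def location_grad(k):
    # finite-difference "gradient": loss difference between the two
    # integer locations adjacent to the continuous parameter k
    lo = math.floor(k)
    return loss(lo + 1) - loss(lo)

k = 2.4  # continuous location parameter
for _ in range(200):
    k -= 0.05 * location_grad(k)  # small learning rate for locations
# k ends up oscillating around the optimum 7; the gradient estimate
# jumps each time k crosses an integer, which is the source of the noise
```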
However, this introduces a technical challenge: when $k$ is non-integer, the calculation involves locations $\lfloor k \rfloor$ and $\lfloor k \rfloor + 1$, leading to potentially discontinuous gradient estimates depending on $k$'s proximity to integer values.\n\nA promising direction for future work would be to introduce continuous interpolation: for a non-integer location $k = 1.2$, we could define $\theta(1.2) = 0.8\theta(1) + 0.2\theta(2)$. This interpolation would allow us to compute left and right derivatives with respect to $\theta(1)$ and $\theta(2)$, potentially yielding smoother, continuous gradient estimates.\n\nIn our current implementation, we incorporate two key design choices to stabilize the optimization process: First, we mitigate potential instabilities by early stopping the location selection process after $\mathcal{B}_s$ steps. Second, we intentionally use a smaller learning rate for location updates compared to coefficient updates. This design ensures that frequency components only shift when there is strong and consistent evidence from sufficient training samples, rather than responding to temporary noise-induced gradients.\n\n[R1] Asymptotic Statistics, A.W. van der Vaart, 2000.\"}", "{\"comment\": \"The authors have addressed most of my concerns. I hope the results of MathInstruct can be added before the deadline.\"}", "{\"title\": \"Response to Reviewer kgMQ, Part 3\", \"comment\": \"### Discussion on edge cases (Question 1).\nOur analysis relies on a key assumption that each element in $\Delta W$ follows i.i.d. $N(0,1)$ from a population perspective. However, there exists an important edge case where LoRA can outperform LoCA. Specifically, this occurs when there exist matrices $A,B\in\mathbb{R}^{n\times r}$ such that each element in $\Delta W-AB^T$ follows i.i.d. $N(0,\epsilon)$, where $\epsilon$ is small compared with the magnitude of $AB^T$.
In this case, $AB^T$ directly provides a low-rank estimation of $\\\\Delta W$ with small reconstruction error (since $\\\\epsilon$ is small). While LoCA would attempt to approximate this same structure, it necessarily introduces some non-zero reconstruction error in the process.\\nTo summarize, LoRA tends to perform better when the matrix to be reconstructed has a non-zero, low-rank structure as its expectation, and the variance is relatively small compared to the expectation.\\n\\n### Experiments on the NLG task (Question 3).\\nThanks for this valuable suggestion. We have conducted experiments on the NLG task as requested. Specifically, in `Section 5.2` of our revised manuscript, we evaluate our method on the E2E NLG Challenge dataset, comparing LoCA against several baselines including Adapter-based methods, LoRA, VeRA, and FourierFT. As shown in `Table 2`, we tested both GPT-2 Medium and GPT-2 Large models, measuring performance across multiple metrics (BLEU, NIST, METEOR, ROUGE-L, and CIDEr). The hyperparameters are also reported in `Table 7`. These results demonstrate that LoCA achieves competitive or superior performance compared to existing methods, including FourierFT, while maintaining parameter efficiency.\"}", "{\"title\": \"Response to Feedback of Part 4\", \"comment\": [\"Thanks for the question. We would like to address these points as follows:\", \"**Empirical evidence of correlation:**\", \"Our extensive experiments across diverse tasks (NLU, instruction tuning, and computer vision) consistently demonstrate that LoCA's improved reconstruction capabilities translate to better downstream performance compared to random frequency selection methods. 
Specifically:\", \"In NLU tasks (Table 1), LoCA achieves higher average performance (86.0/88.7) compared to FourierFT (85.4/88.2) across all GLUE tasks\", \"In instruction tuning (Table 3), LoCA shows superior performance on both MT-Bench and Vicuna benchmarks\", \"In vision tasks (Table 4), LoCA consistently outperforms FourierFT across different parameter budgets\", \"This consistent pattern across such diverse tasks strongly suggests a **positive correlation** between reconstruction quality and task performance. However, we acknowledge that this relationship is complex and might not be strictly linear.\", \"**Analysis on task-specific characteristics:** While it is challenging to completely characterize which task properties benefit most from better reconstruction, our experimental results reveal some patterns:\", \"Structure-sensitive tasks: Tasks that require understanding of complex structural relationships (e.g., CoLA for grammaticality judgments) show particularly strong improvements with LoCA. This suggests that better reconstruction of weight matrices helps preserve structural knowledge from pre-training.\", \"Fine-grained classification: In vision tasks like StanfordCars and FGVC that require fine-grained feature discrimination, LoCA's improved reconstruction capabilities appear to be especially beneficial.\", \"Resource-constrained scenarios: As shown in Fig. 3, the advantages of better reconstruction become more pronounced when working with limited parameter budgets, suggesting that efficient parameter utilization is particularly important in resource-constrained settings.\"]}", "{\"title\": \"Feedback of Part 3\", \"comment\": [\"Thank you for your detailed response and clarifications regarding the stable convergence of LoCA and its ability to capture task-relevant information.\", \"1. 
**Stable Convergence with Dynamic Frequency Selection**:\", \"Thanks for your explanation of the mechanisms used to ensure stable convergence, including finite-difference approximation (Eq. 5), alternating optimization, conservative learning rates, and the intrinsic smoothness of frequency-domain representations. However, several aspects remain unclear and could benefit from further elaboration:\", \"While the finite-difference approximation is described as providing reliable gradient estimates, it is not explained how this method effectively captures the dynamics of shifting frequency components during training.\", \"The learning rate for location parameters is noted as being \\\"significantly smaller\\\" than that for coefficients, but the specific scale or rationale for determining this difference is not provided.\", \"The claim that adjacent frequency components result in smooth and continuous changes assumes a densely sampled frequency spectrum and that neighboring frequencies have similar effects. In practice, especially when selecting a subset of frequencies, this assumption may not always hold.\", \"2. **Risk of Losing Task-Relevant Information**:\", \"Thank you for your response and the clarification regarding LoCA's adaptive gradient-based optimization for selecting frequency components. However, several important points remain unresolved and could benefit from further elaboration:\", \"The initial emphasis on the importance of high-magnitude components for optimal approximation contrasts with the later assertion that LoCA does not explicitly prioritize these components. 
This inconsistency requires clarification to reconcile the theoretical and practical aspects of the selection process.\", \"The explanation lacks specific details on how gradient-based optimization identifies the most informative frequency components, particularly how it ensures that lower-magnitude but potentially task-relevant frequencies are not overlooked.\", \"The potential risk of neglecting task-relevant lower-magnitude frequencies is addressed by citing empirical performance, but there is no accompanying analysis to confirm whether these frequencies contribute meaningfully to downstream tasks.\", \"Different tasks may depend on different frequency components, including those with lower magnitudes. The current response does not discuss how LoCA adapts to tasks where such frequencies are critical. Providing examples or experimental evidence showing LoCA's adaptability across diverse tasks would strengthen your argument for its generalizability and effectiveness.\"]}", "{\"title\": \"Feedback of Part 4\", \"comment\": \"Thank you for the detailed and comprehensive response addressing my concerns. I appreciate the additional analyses provided in Appendix O and the inclusion of VeRA in Tables 1 and 2, which strengthen the comparison between LoCA and other methods. I am satisfied with your answer.\\n\\nThat said, I am still curious about the relationship between the expected reconstruction error and downstream task performance. While your theoretical analysis and empirical validation clearly demonstrate that LoCA achieves lower reconstruction error compared to random frequency selection, the primary focus of PEFT approaches is adaptability to downstream tasks.\\n\\nI understand that many factors, such as optimization dynamics and task-specific structures, influence real-world performance. However, further insights into how LoCA\\u2019s improved reconstruction capabilities contribute to its adaptability across diverse tasks would be valuable. 
For example:\\n- Could you provide an analysis of the correlation between reconstruction error and downstream task metrics across different tasks or datasets?\\n- Are there specific task properties or characteristics that make lower reconstruction errors more likely to translate into better task performance?\"}", "{\"summary\": \"The paper introduces Location-aware Cosine Adaptation (LoCA), a novel method for fine-tuning large language models (LLMs) and vision models in a parameter-efficient manner. LoCA is based on the inverse Discrete Cosine Transform (iDCT) and optimizes the locations of learnable frequency components. It addresses limitations of previous low-rank adaptation methods by providing greater optimization flexibility and expressivity. Theoretical analysis and empirical observations support the superiority of LoCA over traditional low-rank methods and iDFT-based methods. LoCA dynamically selects the most informative frequency components during training, leading to enhanced parameter efficiency and computational feasibility. The method demonstrates state-of-the-art performance on diverse language and vision tasks with fewer parameters. The introduction of the paper contextualizes the need for parameter-efficient fine-tuning methods due to the prohibitive costs of fully fine-tuning increasingly large models, and LoCA is presented as a solution that maintains performance while reducing trainable parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The concept of applying low-rank adaptation within the Fourier domain is intriguing, and it implicitly suggests a method of tuning that utilizes all available parameters.\\n\\n2. The theoretical results appear to be novel and have captured the interest of the reviewer.\\n\\n3. 
The proposed method delivers strong performance benefits while maintaining an exceptionally low parameter cost.\", \"weaknesses\": \"The reviewer, not being an expert in this area, has not identified any major weaknesses. However, with some background in empirically tuning LLMs and ViTs, the reviewer would like to inquire further about the experimental setup.\\n\\n1. There lack some benchmarks and baselines. \\n\\n2. Common advantages of the PEFT method include reduced computation and memory costs. The paper's contribution would be strengthened if the authors included these aspects in their analysis.\", \"questions\": \"I will keep my positive score if the authors address Question 1. Other questions require much more experiment time and are quite minor to improve the paper.\\n\\n1. MT-bench is considered an unstable benchmark. It is strongly recommended that the authors utilize the MathInstruct Dataset instead, which is more stable and generally requires a higher level of expressive power.\\n\\n2. For fine-tuning Roberta, typical benchmarks include RTE, BoolQ, SST-2, WSC, WIC, MultiRC, SQuAD, CB, COPA, DROP, GSM8K, and ReCoRD. Could the authors consider adding any benchmarks that are currently missing?\\n\\n3. COLA, ReLoRA, and DoRA represent typical LoRA variants. It would be beneficial if the authors could include any of these variants that are not already covered.\\n\\n4. In Figure 3, it appears that the performance gain may continue to increase with a larger value of 'r.' Could the authors extend the range of 'r' to determine the optimal value that yields the best performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and clarifications regarding the optimization strategy and implementation details. 
Your theoretical explanation of the coefficient-first optimization order, supported by its convex nature and the mitigating effect of frequent alternation, is well-reasoned and logical. Additionally, the results in Table 5 demonstrate the stability and advantages of central difference over forward and backward approximations, which I find convincing.\n\nI appreciate the detailed rebuttal and the insights it has provided into your work. It has significantly deepened my understanding of the work. I will increase my score to 6.\"}", "{\"title\": \"Thanks for your timely responses\", \"comment\": \"Your responses were very accurate and timely, effectively resolving my confusion. In fact, I am not a reviewer who only cares about experimental results. Your theory is indeed quite elegant and has certain guiding significance for future research in the PEFT field. After your thorough revisions, I have increased my score to 6, as this paper meets the acceptance criteria of ICLR. I hope you continue to produce such high-quality work!\"}", "{\"title\": \"Response to Reviewer STG6, Part 4\", \"comment\": \"### Comparison and Discussion on VeRA and DoRA (Question 1).\nThanks for the question. While it may not be feasible to encompass all low-rank methods within a single theorem, as some methods like VeRA are not explicitly designed for reconstruction, we can conduct case-by-case analyses since all low-rank-based methods are inherently bounded in their reconstruction capabilities. In response to this concern, we have expanded our analysis in two ways. First, we have added a detailed discussion in `Appendix O` that examines the reconstruction capabilities of both VeRA and DoRA, considering their unique architectural characteristics and optimization approaches. This analysis provides insights into how these methods relate to our theoretical framework.
Second, we have enhanced our experimental evaluation by including VeRA in our main experiments, as shown in `Table 1` and `Table 2`, which provide comprehensive empirical comparisons across GLUE and E2E NLG benchmarks.\n\n### Contradictory Findings to FourierFT (Question 3).\nThanks for the careful observation. We would like to address these concerns comprehensively:\n\n**Theoretical Framework and Empirical Support**:\nOur claim about the expressivity of random frequency selection is primarily supported by our theoretical analysis, which is further validated through simulation experiments presented in Figures 6 and 7 of `Appendix G`. These simulation experiments across different ranks and dimensionalities demonstrate that randomly selected frequency components consistently yield lower expressivity compared to LoRA under matched parameter budgets.\n\n**Experimental Discrepancies**:\nFor QQP, despite extensive hyperparameter tuning, we were unable to reproduce the reported performance where FourierFT achieves ~91.3% accuracy with n=200 components and approaches 92% with n=12288 (Figure 4 in the FourierFT paper). This observation has been corroborated by several researchers in the PEFT community whom we consulted. To ensure a fair comparison, we conducted our own comprehensive experiments using an identical experimental setup. Our implementation as well as FourierFT is publicly available in the `Supplementary Materials` for reproducibility and comparison.
The apparent contradiction with `Section C.2` of the FourierFT paper has been explained in our response to Weakness 1.\\n\\n**Whether LoCA's improved performance justifies its increase in parameter count**:\\nWhile we acknowledge that FourierFT can achieve competitive performance in certain scenarios, our theoretical and empirical results suggest that careful selection of frequency components, as implemented in LoCA, offers more consistent and robust performance across different tasks and model scales.\\nWe have also included additional experimental results in `Appendix J` comparing training speed and memory usage across different parameter budgets, which demonstrate that LoCA's improved performance justifies its modest increase in parameter count.\\n\\n### Empirical evaluations comparing selective FourierFT and LoCA (Question 4)\\nThanks for the comment. We would like to clarify that our theoretical analysis in Theorem 1 specifically addresses the **expected reconstruction error** under the asymptotic Gaussian condition, rather than task-specific performance metrics. The statement \\\"$W_F^{(3)}$ outperforms $W_F^{(2)}$\\\" specifically refers to the **expected reconstruction error** within our theoretical framework, which is proved mathematically within this theoretical framework.\\n\\nEmpirically comparing different FourierFT selection strategies would not directly validate these theoretical claims, as real-world task performance is influenced by many factors beyond reconstruction error, including optimization dynamics, task-specific structures, and various implementation details. 
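As a toy, self-contained illustration of this expected-reconstruction-error notion (our own Monte Carlo sketch on synthetic i.i.d. Gaussian updates, not the simulations of Appendix G), one can compare random versus magnitude-based frequency selection:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
d, n, trials = 32, 64, 20  # matrix size, kept components, Monte Carlo trials

def mean_recon_err(random_select):
    errs = []
    for _ in range(trials):
        dW = rng.standard_normal((d, d))       # i.i.d. N(0,1) update
        F = dctn(dW, norm="ortho").ravel()
        if random_select:
            keep = rng.choice(F.size, n, replace=False)
        else:
            keep = np.argsort(np.abs(F))[-n:]  # largest-magnitude components
        F_s = np.zeros_like(F)
        F_s[keep] = F[keep]
        errs.append(np.linalg.norm(dW - idctn(F_s.reshape(d, d), norm="ortho")))
    return float(np.mean(errs))

# for an orthonormal transform, keeping the largest-magnitude coefficients
# minimizes the squared reconstruction error over all n-component choices
# (Parseval), so informed selection beats random selection in expectation
print(mean_recon_err(False) < mean_recon_err(True))  # True
```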
Our goal in line 191 is to establish a theoretical foundation for understanding the expressivity of frequency-domain methods in terms of their ability to approximate weight updates, which we rigorously proved through mathematical analysis.\\n\\nIt is worth noting that the mixed empirical findings actually align with our theoretical framework, since LoCA does outperform FourierFT in average (expected) performance on GLUE and ViT benchmarks.\"}", "{\"summary\": \"This paper proposes LoCA, a method for fine-tuning pre-trained models using frequency-domain adaptation via the inverse Discrete Cosine Transform (iDCT). LoCA focuses on selecting key frequency components to improve the expressivity and efficiency of model adaptation. The theoretical analysis argues that iDCT-based adaptation can match or exceed the effectiveness of low-rank methods. However, the empirical gains over existing methods like LoRA are marginal, especially in vision tasks. LoCA\\u2019s added complexity, due to finite-difference approximations and alternating optimization, may not be fully justified by these modest improvements, potentially limiting its practical appeal.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces LoCA, a frequency-based approach to parameter-efficient fine-tuning that selectively optimizes specific frequency components using the Discrete Cosine Transform. By focusing on significant frequencies within weight matrices, LoCA aims to reduce the parameter count needed for fine-tuning while maintaining model expressivity. This selective frequency adaptation presents a practical alternative to spatial-domain methods like LoRA, providing a new angle on efficient model tuning. 
The paper\\u2019s theoretical framework, including empirical spectral density and the Central Limit Theorem to analyze expressivity, helps ground LoCA's approach in established statistical methods, adding quality to the work.\", \"weaknesses\": \"\\u25cf This paper appears to provide inadequate empirical support for its theoretical claims. A central claim of the paper is that randomly selected frequencies for FourierFT yield lower expressivity than LoRA; however, this claim lacks direct experimental validation, which is critical to substantiate the theoretical conclusions. For instance, Figure 3 shows mixed results for FourierFT on the FGVC task, while Figure 4 in *Parameter-Efficient Fine-Tuning with Discrete Fourier Transform* by Gao et al. (2024) (arXiv:2405.03003) presents empirical evidence that contradicts this claim by showing that FourierFT achieves higher accuracy than LoRA across multiple GLUE benchmarks. Additionally, Section C.2 on Expressive Ability in the FourierFT paper\\u2019s supplementary material further supports FourierFT\\u2019s superior expressivity. The paper also lacks empirical evaluations for selective FourierFT and LoCA, which would further validate the claims made.\\n\\n\\u25cf The paper omits a comparison with a highly representative spatial-domain PEFT method, VeRA, which focuses on lightweight adaptations in the spatial domain and would serve as a useful benchmark for LoCA's performance.\\n\\n\\u25cf The design of LoCA introduces significant parameter overhead due to the individual optimization of frequency component locations and coefficients for each layer. 
For example, in a model with \\\\( L = 32 \\\\) layers (e.g., LLaMA-2 7B), LoCA\\u2019s parameter count is approximately 2.82 times that of FourierFT, raising concerns about the scalability and efficiency of LoCA for large-scale models.\\n\\n\\u25cf The theoretical framework assumes asymptotic normality of weight updates, enabling the use of the Central Limit Theorem and empirical spectral density for analyzing expressivity. However, this assumption relies on i.i.d. updates, which may not hold in the context of gradient-driven, correlated weight adjustments inherent in LoRA and LoCA. Given the limited and targeted nature of LoCA\\u2019s updates, the cumulative adjustments may lack the \\u201csum of many independent adjustments\\u201d necessary for CLT to apply reliably. This assumption weakens the robustness of the theoretical claims, as the actual distribution of weight updates is likely far from normal in practical implementations. \\n\\n\\u25cf LoCA\\u2019s dynamic selection of high-magnitude frequency components across epochs may introduce instability during convergence, as the selection of significant frequencies may shift due to changing gradients. This could impact the model\\u2019s ability to achieve stable and consistent updates over time. Furthermore, by focusing solely on high-magnitude frequencies, LoCA risks omitting task-relevant information in lower-magnitude components, potentially limiting its adaptability in tasks requiring finer-grained details.\\n\\n\\u25cf The method also relies on finite-difference approximation to estimate location gradients, which introduces additional computational and memory costs. This overhead may significantly increase CUDA memory requirements, particularly in high-dimensional models or when frequent updates are necessary.\", \"questions\": \"\\u25cf Q1: LoRA may not be the most parameter-efficient approach among spatial-domain PEFT methods. The work by Kopiczko et al. 
(2023) in \"VeRA: Vector-based Random Matrix Adaptation\" (arXiv:2310.11454) demonstrates a more parameter-efficient and lightweight alternative, focusing on diagonal matrix adaptations to achieve efficient adaptation without the need for frequency-based transformations. Could the authors clarify whether their proof and theoretical framework apply to VeRA? Additionally, this paper lacks a comparative analysis of VeRA, both theoretically and experimentally. Would the established proof also support an evaluation of DoRA\\u2019s expressivity?\\n\\n\\u25cf Q2: For each layer in LoCA, both frequency component locations and coefficients are optimized individually. This approach appears to introduce a higher number of parameters compared to FourierFT, which selects $n$ locations randomly and shares these locations across all layers. Specifically, FourierFT\\u2019s parameter count is:\\n\\n$2n + nL = n(L + 2)$\\n\\nwhere $L$ represents the number of layers in the pre-trained model.\\n\\nIn contrast, LoCA introduces $2n$ parameters for each layer\\u2019s locations, and $n$ for each layer\\u2019s coefficients, resulting in a total parameter count of:\\n\\n$3n \\\\times L$\\n\\nThis yields a parameter ratio between LoCA and FourierFT of\\n\\n$\\\\frac{3L}{L + 2}.$\\n\\nFor example, with LLaMA-2 7B where $L = 32$, LoCA\\u2019s parameter count is approximately 2.82 times that of FourierFT. This raises concerns about parameter efficiency, especially in large models. To clarify whether the additional parameters in LoCA yield proportional benefits, could the authors provide empirical comparisons across various model sizes and tasks, measuring both fine-tuning performance and resource usage (e.g., memory and compute requirements)?
Specific metrics, such as performance improvements relative to parameter increase and scaling efficiency on different benchmarks, would help assess whether gains in expressivity or accuracy justify the increased parameter cost.\\n\\n\\u25cf Q3: In lines 191-199, the authors claim that randomly selecting frequencies for FourierFT yields the lowest expressivity, performing worse than LoRA; however, this claim lacks experimental support. For instance, Figure 3 in this paper shows mixed results for FourierFT on the FGVC task, whereas Figure 4 in Gao et al. (2024), *\\\"Parameter-Efficient Fine-Tuning with Discrete Fourier Transform\\\"* (arXiv:2405.03003), presents contrasting findings, particularly on the QQP task in GLUE. In Gao et al., FourierFT consistently outperforms LoRA across GLUE benchmarks, achieving higher accuracy with minimal random spectrum updates and fixed locations across layers. Furthermore, Section C.2 on Expressive Ability in the FourierFT paper\\u2019s supplementary material reinforces FourierFT\\u2019s superior expressivity over LoRA. Could the authors provide empirical comparisons to clarify these discrepancies, ideally across multiple model sizes and tasks, with metrics on fine-tuning performance and resource usage (e.g., memory and computational requirements)? Demonstrating whether the increased parameter count in LoCA yields proportional performance benefits would strengthen the case for its efficiency.\\n\\n\\u25cf Q4: Additionally, the paper lacks empirical evaluations comparing selective FourierFT and LoCA, which would be valuable in validating the theoretical claims. For instance, in line 191, the statement that $W_F(3)$ can outperform $W_F(2)$ would benefit from empirical results to illustrate how these specific configurations impact performance.
Further analyses using different selection strategies within FourierFT would also help substantiate the expressivity claims and clarify the mixed findings observed.\\n\\n\\u25cf Q5: The proof assumes asymptotic normality of incremental weight updates, enabling statistical analysis of expressivity via the Central Limit Theorem and empirical spectral density. However, in LoRA, only a subset of weights is updated through low-rank reparameterization, while frequency-based methods like LoCA further restrict updates to high-amplitude frequency components. Given that these updates are gradient-driven and thus correlated, the i.i.d. assumption essential for CLT may not strictly hold. With limited, targeted updates, the cumulative effect lacks the \\\"sum of many independent adjustments\\\" necessary to ensure asymptotic normality. Could the authors provide further justification for assuming convergence to normality under selective updating, and clarify how potential deviations from i.i.d. behavior may impact expressivity comparisons? It would be helpful if the authors could conduct specific analyses or empirical tests, such as quantifying deviations from normality in the weight updates or performing sensitivity analyses to assess the impact of non-normality on expressivity. \\n\\n\\u25cf Q6: In the alternating optimization strategy, the method first optimizes the coefficients of the selected frequency components $\\\\alpha$ for $B_a$ steps before refining their locations. Then, with $\\\\alpha$ fixed, it optimizes the locations $l$ for $B_l$ steps, and finally, the procedure fixes the locations and optimizes only the coefficients $\\\\alpha$ until convergence.\\n\\nCould the authors clarify the rationale behind this specific order of coefficient-first optimization and its impact on stability and convergence?
While this separate optimization approach might simplify the process, it may not fully capture the interactions between coefficients and locations, potentially limiting optimality. Have the authors explored an alternative order\\u2014optimizing locations first and then coefficients\\u2014and could they provide insights on how this might affect convergence and final performance?\\n\\nIn the ablation study (lines 480-485), the authors present several variant comparisons, yet they do not include an analysis of this alternative pipeline. Additionally, how are the parameters $B_l$ and $B_a$ selected\\u2014is their choice task-specific? From Table 4, it appears that the V5 variant achieves relatively better results, but this is not consistent with the description of the alternative policy in lines 284-292 and the algorithm in lines 945-963. Could the authors clarify these inconsistencies and provide further justification for the selected optimization order and parameter settings? \\n\\n\\n\\u25cf Q7: Could the authors clarify how LoCA ensures stable convergence given the dynamic selection of specific-magnitude (e.g., high-magnitude) frequency components in $\\\\Delta W$ across epochs? Specifically, as top-ranked frequencies may shift due to gradient changes, how does LoCA maintain consistency in updates to avoid potential instability in the training process? Additionally, could the authors explain how the specific frequency components selected in LoCA\\u2014whether high or low frequencies\\u2014consistently contribute to model performance across tasks? Is there a risk that focusing solely on high-magnitude components could lead to loss of task-relevant information in lower-magnitude frequencies, which may carry finer-grained details?\\n\\n\\n\\u25cf Q8: Could the authors clarify the computational and memory overhead associated with estimating location gradients using finite-difference approximation?
Specifically, does this approach increase CUDA memory requirements significantly, and if so, how does it impact the overall efficiency of LoCA? Additionally, an analysis of the trade-offs between accuracy and resource usage in this approximation method would be valuable to understand its practical feasibility.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response and the clarifications regarding the stable convergence of LoCA and its ability to capture task-relevant information. Your explanations of the finite-difference approximation, learning rate ratios, and the intrinsic smoothness of DCT basis functions are theoretically valid and address several of my initial concerns.\"}", "{\"title\": \"Thanks from the Authors\", \"comment\": \"Dear Reviewers and ACs,\\n\\nAs the rebuttal period comes to an end, we sincerely thank all reviewers for their thorough comments and constructive suggestions, which have significantly improved the quality of this paper. We are indeed encouraged by their supportive feedback and will continue pursuing this research direction. We also appreciate the Area Chair's involvement in the discussion. This has been a fruitful discussion that helped shape our work.\\n\\nMany thanks,\\n\\nThe authors\"}", "{\"title\": \"Response to Feedback of Part 3\", \"comment\": \"Thank you for your follow-up questions. We appreciate the opportunity to provide further clarification on these important points.\\n\\n**Stable Convergence with Dynamic Frequency Selection:**\\n\\nRegarding the effectiveness of finite-difference approximation in capturing frequency dynamics: the central difference approximation we use essentially computes the discrete derivative of the loss with respect to location changes in the frequency domain. This approach is theoretically justified because it provides an unbiased estimate of the true gradient in the discrete frequency space. Most importantly, it captures not just the immediate effect of moving a frequency component, but also the interaction effects with neighboring components through the chain rule in backpropagation.\\n\\nRegarding the learning rate ratio between location and coefficient updates: we determine this based on the theoretical properties of the frequency domain. Coefficient updates directly modify the magnitude of the contribution from each frequency component. Location updates, however, have a more fundamental effect on the representation. Therefore, we set a small location learning rate to ensure stable adaptation of the frequency basis.\\n\\nAbout the smoothness assumption: we believe there may be a misunderstanding here. The smoothness of frequency components is an inherent mathematical property of the DCT basis functions themselves, not a property that depends on how many or which components we select. Each DCT basis function is continuous and differentiable by definition, and selecting a subset of these functions does not change their smooth nature. This is analogous to how selecting certain terms from a Fourier series still results in a smooth function: the smoothness is intrinsic to the basis functions, not to how many we use.\\nFurthermore, our empirical results in Figure 2 demonstrate stable convergence during training, confirming that our selection of frequency components maintains the desired smoothness properties in practice.\\n\\n**Risk of Losing Task-Relevant Information:**\\n\\nThe relationship between magnitude and importance in our method is more nuanced than simply selecting high-magnitude components: while our theoretical analysis uses magnitude as a proxy for component importance to establish upper bounds on expressivity for these methods, the practical implementation uses gradient-based optimization to determine importance. This is consistent with all other PEFT methods. For instance, LoRA aims to approximate the full fine-tuning matrix using a low-rank decomposition, but that decomposition is likewise updated through backpropagation. In other words, our frequency-domain component selection is directly controlled by task-relevant gradient signals, so task-relevant information is preserved. When task-specific optimization demands stronger expressivity, our method can perform better than LoRA and FourierFT, since LoCA gains expressivity by strategically selecting frequency components.\\n\\nRegarding the adaptation to different tasks, the gradient-based optimization automatically adapts to task-specific patterns by strengthening relevant components through training. The alternating optimization strategy allows for exploration of different frequency combinations early in training before settling on task-relevant components.\"}
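The expressivity claim in the exchange above can be illustrated with a small, self-contained numpy experiment. This is our own toy sketch, not the authors' code: for a near-i.i.d. Gaussian stand-in for a full fine-tuning update, keeping the largest-magnitude 2D-DCT coefficients (an idealized, optimally selected frequency method) reconstructs the update better than a budget-matched low-rank factorization. The budget accounting here ignores the storage of location indices, an assumption that favors the frequency-domain side.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix; rows are smooth cosine basis vectors (C @ C.T = I).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
m, r = 100, 8
T = rng.normal(size=(m, m))        # stand-in for a near-i.i.d. full fine-tuning update

# Low-rank baseline: best rank-r approximation, costing 2*m*r parameters.
U, s, Vt = np.linalg.svd(T)
T_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]
err_lowrank = np.linalg.norm(T - T_lowrank)

# Frequency-domain: keep the 2*m*r largest-magnitude 2D-DCT coefficients
# (location indices are not counted against the budget in this toy).
C = dct_matrix(m)
F = C @ T @ C.T                    # 2D DCT of the update
k = 2 * m * r
keep = np.argsort(np.abs(F), axis=None)[-k:]
F_sparse = np.zeros_like(F)
F_sparse.flat[keep] = F.flat[keep]
T_freq = C.T @ F_sparse @ C        # inverse 2D DCT of the sparse spectrum
err_freq = np.linalg.norm(T - T_freq)
```

On strongly correlated (low-rank-structured) updates the comparison can flip, which is consistent with the critical-correlation analysis discussed elsewhere in this thread.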
Replaced _optimization space_ with _hypothesis space_ and _flexible optimization_ with _enhanced expressivity_, see page 2.\\n\\n2. Added Section 5.2 \\\"Natural Language Generation\\\", which includes experimental details and results (Table 2) of fine-tuning GPT-2 models on the E2E dataset, including comparisons with VeRA, see page 7.\\n\\n3. Supplemented hyperparameter settings for E2E dataset in Appendix D (Table 7), see page 18.\\n\\n4. Updated Table 1 to include performance comparisons with VeRA, see page 7.\\n\\n5. Updated Table 4 to show performance comparisons between FourierFT and LoCA under the same parameter budget, see page 9.\\n\\n6. Moved baseline method descriptions to Appendix C, see page 16 and 17.\\n\\n7. Updated Table 10 in Appendix J to compare training speed and memory usage across different methods, datasets, models, and parameter budgets, with corresponding discussions, see page 30.\\n\\n8. Added Appendix O to discuss the reconstruction capabilities of VeRA and DoRA, see page 38.\\n\\nCorresponding changes have been highlighted in the revised paper.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"I thank the authors for their detailed response, and I wish the authors could always produce such meaningful and high-quality work in the future. Good luck!\"}", "{\"title\": \"Response to Feedback of Part 1\", \"comment\": \"Thank you for this insightful question about the relationship between **reconstruction capability and downstream task performance**. We acknowledge this is a complex issue and would like to offer several key perspectives:\\n\\nFirst, it is important to recognize that task performance is influenced by multiple factors beyond reconstruction capability. These include hyperparameter selection, optimization dynamics, and the inherent randomness in neural network training. 
In our experiments, we observed that even with identical methods and models, different random seeds or slight variations in hyperparameters can lead to different results, making it challenging to establish a direct, consistent relationship between reconstruction capability and task performance.\\n\\nSecond, there is a widely acknowledged phenomenon in the PEFT community where full fine-tuning does not always outperform PEFT methods, despite having complete parameter flexibility. This counter-intuitive finding has been consistently observed across multiple studies and suggests that certain task-specific structures may inherently favor specific parameter-efficient adaptation approaches. This indicates that the relationship between parameter space flexibility and task performance is more complex than it might initially appear.\\n\\nThird, as we acknowledge in `Appendix M`, our current method uses finite-difference approximation to estimate gradients for location optimization. While computationally efficient, this approximation may not always lead to optimal location convergence. This limitation could also contribute to the observed discrepancy between theoretical reconstruction capability and actual task performance.\\n\\nWhile acknowledging these complexities, we argue that reconstruction capability serves as a reasonable proxy metric for evaluating PEFT methods, particularly in the absence of task-specific prior knowledge. Our statement that reconstruction capability may not directly translate to downstream task performance in all scenarios acknowledges this complex relationship while maintaining the value of reconstruction analysis as an important theoretical framework for understanding PEFT methods.\"}", "{\"title\": \"Response to Reviewer 96ov, Part 1\", \"comment\": \"Thanks for the valuable comments. We address each point raised in the review with detailed responses below.\\n\\n### Limited amount of content on related work (Weakness 1).\\nThanks for the comment.
We respectfully disagree with the assessment that our literature coverage is insufficient. Our paper adopts a strategic organization of related work that avoids redundancy while maintaining comprehensiveness. Specifically:\\n\\n* The `Introduction` section provides a thorough overview of PEFT methods, systematically categorizing them into adapter-based methods, prompt-based approaches, partial fine-tuning, and low-rank adaptation methods. We also discuss various LoRA variants, establishing the necessary background and motivation for our work.\\n* The `Related Work` section takes a different angle, focusing on connecting PEFT with matrix compression techniques. This novel perspective allows us to frame PEFT methods through the lens of matrix compression, and draw parallels between low-rank decomposition and frequency-domain compression. Besides, it identifies and addresses a critical gap in the PEFT literature regarding theoretical comparisons between these approaches.\\n\\n* Detailed discussions of individual PEFT methods are now presented in `Appendix C`, where we provide details of baseline methods. \\n\\nWe believe this organization is more effective than duplicating method descriptions across multiple sections. Each section serves a distinct purpose: `Introduction` establishes context, `Related Work` provides unique insights through the compression perspective, and `Appendix C` offers detailed method descriptions.\\n\\n### Experimental results on more datasets (Weakness 2).\\nThanks for the suggestion. In response to this concern, we have expanded our experimental evaluation in the revised manuscript to include a new section (`Section 5.2`) that examines LoCA's performance on the natural language generation (NLG) task. Specifically, we conduct experiments on the E2E NLG Challenge benchmark using GPT-2 Medium and Large models.
The results, presented in `Table 2` in the revised manuscript, demonstrate that LoCA achieves superior performance compared to existing PEFT methods across multiple established metrics, particularly for the GPT-2 Large model. All experimental hyperparameters are reported in `Table 7` of the revised manuscript.\\n\\nOur current experimental framework now comprehensively evaluates LoCA across NLU, NLG, instruction tuning, and computer vision. This diverse evaluation encompasses varying model scales (from RoBERTa-base to LLaMA-13b, and from ViT-base to ViT-large) and tasks of different complexity (from basic classification to open-ended generation). We believe this comprehensive evaluation is sufficient to illustrate the applicability and robustness of LoCA.\"}
To address this:\\n\\na) We have conducted extensive empirical testing in `Section 2` (Figure 1) that demonstrates our distribution assumptions remain reasonable approximations even under practical fine-tuning conditions.\\n\\nb) Following classical statistical theory, the CLT's applicability extends beyond strict independence to cases with sufficiently weak dependence. While deriving explicit mixing rates for neural network weight updates presents significant technical challenges, our empirical analyses suggest the dependencies are well within acceptable bounds for CLT applications, as we show below.\\n\\n**Impact of Deviations from i.i.d. Behavior:**\\n\\nTo directly address your concern about quantifying the impact of departures from i.i.d. behavior, we conducted a systematic analysis using a controlled correlation structure:\\n\\na) Experimental Setup:\\nWe model weight updates as $W^T \\\\sim N_{K^2}(0,\\\\Sigma)$, where $\\\\Sigma = \\\\rho\\\\mathbb{1}\\\\mathbb{1}^T + I_{K^2}$, with $\\\\mathbb{1}=(1,\\\\ldots,1)^T\\\\in\\\\mathbb{R}^{K^2}$. This allows us to precisely control the degree of dependence through the correlation $\\\\rho$.\\n\\nb) Quantitative Results:\\nFor a $300\\\\times 300$ matrix, we identified critical correlation thresholds where **LoRA's reconstruction ability begins to outperform LoCA** with numerical simulation experiments. The experimental results can be found in the `Supplementary Materials` as well as `Appendix P` in the revised manuscript.
Specifically:\\n\\n* rank $r = 8$: critical $\\\\rho_c = 0.09$\\n* rank $r = 16$: critical $\\\\rho_c = 0.14$\\n* rank $r = 24$: critical $\\\\rho_c = 0.17$\\n* rank $r = 32$: critical $\\\\rho_c = 0.19$\\n\\nThese findings are significant because:\\n* The critical correlation thresholds are quite high, indicating our method remains effective under substantial dependencies\\n* The increasing trend of critical $\\\\rho_c$ with rank suggests enhanced robustness in higher-dimensional settings\\n\\nc) Statistical Detection of Correlation:\\nTo validate that these critical correlation levels represent statistically significant departures from independence, we developed a test based on the Marchenko-Pastur (MP) law. The MP law indicates that the eigenvalues fall within the interval $[\\\\lambda_-,\\\\lambda_+]$. We define a test statistic as:\\n$$\\nT=\\\\dfrac{\\\\sum_{\\\\lambda\\\\notin[\\\\lambda_-,\\\\lambda_+]}\\\\lambda}{\\\\sum\\\\lambda}.\\n$$\\nThrough simulation, we determined that the critical value at the 0.95 significance level is 0.005. The test statistics corresponding to $\\\\rho=0.09,0.14,0.17,0.19$ are $0.086,0.134,0.143,0.146$ respectively, indicating that these values are readily detectable.\\n\\nWe have updated the paper to include these analyses in `Appendix P`, providing a more comprehensive validation of our theoretical claims while acknowledging the practical complexities of deep learning optimization dynamics.\"}
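The MP-based detection idea described above can be sketched with numpy. This is an illustrative re-implementation at a smaller scale, not the authors' exact simulation, so the 0.005 critical value quoted above (which applies to their $300\times 300$ setting) does not carry over to this toy size.

```python
import numpy as np

def mp_outlier_statistic(X):
    """T = eigenvalue mass of the sample covariance outside the MP bulk / total mass.

    Assumes unit-variance entries; the Marchenko-Pastur edges for aspect ratio
    gamma = p/n are (1 +- sqrt(gamma))^2.
    """
    n, p = X.shape
    gamma = p / n
    lam = np.linalg.eigvalsh(X.T @ X / n)
    lo, hi = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    return lam[(lam < lo) | (lam > hi)].sum() / lam.sum()

rng = np.random.default_rng(0)
n, p, rho = 2000, 200, 0.1

# Independent case: i.i.d. N(0, 1) entries, so eigenvalues stay in the bulk.
T_iid = mp_outlier_statistic(rng.normal(size=(n, p)))

# Equicorrelated case: Cov = (1 - rho) I + rho 11^T, built via a shared factor.
Z = rng.normal(size=(n, p))
common = rng.normal(size=(n, 1))
X_corr = np.sqrt(1 - rho) * Z + np.sqrt(rho) * common
T_corr = mp_outlier_statistic(X_corr)
```

In the equicorrelated design, the spiked population eigenvalue ($1 - \rho + \rho p \approx 20.9$ here) escapes the MP bulk, so $T$ jumps well above its near-zero value in the independent case, which is the detection effect the response relies on.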
Extensive experimental results across a range of tasks, including NLU, NLG, IFT, and CV, demonstrate that LoCA achieves promising performance. After the rebuttal, 5 out of 6 reviewers gave positive ratings, with only one low-confidence negative rating. The area chair concurs with the majority of reviewers and recommends accepting this submission.\", \"additional_comments_on_reviewer_discussion\": \"Most concerns were addressed during the rebuttal phase, and the authors are encouraged to incorporate these discussions into the final version.\"}", "{\"title\": \"Response to Reviewer 96ov, Part 2\", \"comment\": \"### Comparisons with other fine-tuning methods (Question 1).\\nThanks for the comment. In the revised manuscript, we have expanded our experimental comparisons to include a broader range of PEFT methods. Specifically, in `Table 1` and `Table 2`, we compare LoCA not only with LoRA-based methods but also with other competitive approaches, including adapter-based methods and recent techniques such as VeRA and DoRA. These methods represent different paradigms in PEFT and are widely recognized in the field. VeRA and DoRA, in particular, are cutting-edge approaches that have demonstrated strong performance across various tasks. Our comprehensive comparisons show that LoCA achieves competitive or superior performance against these diverse baseline methods, which we believe provides a thorough validation of our approach.\"}", "{\"title\": \"Response to Reviewer kgMQ, Part 1\", \"comment\": \"Thanks for the thorough review and constructive feedback. We have carefully considered all the comments and address each point in detail below.\\n\\n### Imprecise terminology (Weakness 1).\\nThanks for pointing this out. We have revised the manuscript to use _hypothesis space_ instead of _optimization space_, as it better reflects the set of all possible functions that a model can learn.
Similarly, we have replaced _flexible optimization_ with _enhanced expressivity_ to more precisely describe the model's capability to represent diverse solution spaces. These terms align better with standard terminology in the machine learning literature.\\n\\n### Asymptotic normality of M-estimator (Weakness 2).\\nThanks for the comment. We can explain this from two aspects. From a strict theoretical perspective, a more general version of the M-estimator result is derived in [R1] (page 47, Lemma 5.10). This generalization applies when both $\\\\Psi(\\\\theta)=0$ and $\\\\Psi_n(\\\\theta)=0$ yield unique solutions $\\\\theta^*$ and $\\\\hat{\\\\theta}_n$ respectively, and $\\\\Psi$ exhibits local monotonicity at point $\\\\theta^*$. Notably, this relaxes the traditional requirement of global monotonicity. Furthermore, the conventional full rank condition is replaced by the pointwise convergence of $\\\\Psi_n(\\\\theta)$ to $\\\\Psi(\\\\theta)$, a property that is satisfied in our case through the WLLN.\\n\\nHowever, requiring a unique solution is somewhat restrictive in neural networks. When there exists a set of global minimizers for the risk function, the consistency property still holds when we consider $\\\\Theta^*$, defined as the solution set to the equation $\\\\Psi(\\\\theta)=0$. A detailed statement of this result can be found in Theorem 5.14 (page 48) of [R1]. To satisfy the assumptions required by this theorem, we can constrain the parameter matrix by clipping each element to lie within the interval $[-M,M]$, where $M$ is a defined parameter range.\\n\\nTo establish asymptotic normality, apart from the above consistency property ($\\\\hat{\\\\theta}_n \\\\xrightarrow{p} \\\\theta^*$), we need to verify two additional conditions: (1) the existence of first and second derivatives, and (2) the finiteness of their corresponding expectations, provided that consistency has been established. These conditions are satisfied under the boundedness assumption.
Therefore, we can focus solely on verifying the conditions for consistency (Theorem 5.21 of [R1], page 52).\\n\\nFrom an empirical perspective, asymptotic normality requires an enormous number of data points, and the minimizers of a deep neural network are extremely complex. Therefore, we do not attempt to verify these conditions theoretically, as they are only sufficient conditions, not necessary conditions. In this work, we regard the asymptotic normality of M-estimators as a commonly used assumption in statistics and machine learning. Then, we shift our focus to the actual data in the experiments and conduct visualization (Figure 1a, Section 2), statistical tests (Figure 1b, Section 2), and ESD analysis (Figure 1c, Section 2) to validate the reasonableness of our assumptions about the data distribution.\\n\\n[R1] Asymptotic Statistics, A.W. van der Vaart, 2000.\"}", "{\"summary\": \"The paper introduces a novel parameter-efficient fine-tuning method, Location-Aware Cosine Adaptation (LoCA), that leverages the inverse Discrete Cosine Transform (iDCT) for selectively optimizing frequency-domain components in pre-trained language and vision models. LoCA aims to surpass traditional low-rank adaptations by dynamically choosing informative frequency components, thus balancing parameter efficiency and performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a rigorous theoretical comparison between frequency-domain and low-rank adaptation methods, filling a gap in the literature on expressivity and optimization constraints.\\n2. LoCA\\u2019s use of iDCT with dynamic selection of frequency components represents a creative improvement over conventional low-rank methods, particularly for parameter efficiency.\", \"weaknesses\": \"Overall it's a good paper, and I will raise my score if the authors can address my concerns.\\n1.
LoCA introduces a computationally complex process with alternating optimization steps and central difference approximation, which could pose practical challenges.\\n2. How does LoCA handle potential noise in frequency component selection, and are there measures in place to stabilize the optimization process?\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 3NDS, Part 2\", \"comment\": \"### Why does LoCA not show a significant advantage over FourierFT (Weakness 2 and Question 4)\\nThanks for the thoughtful question. We would like to address this concern from several aspects. \\n\\nOur theoretical analysis aims to provide guidance for designing optimal frequency-domain methods by identifying the optimal locations for individual frequency coefficients. However, the current implementation using finite-difference approximation for location updates may not necessarily converge to the theoretically optimal locations. This limitation has been acknowledged in `Appendix M` and represents an area for future improvement, as discussed in our response to Reviewer 8m9H's Weakness 2.\\n\\nIt is also important to note that superior matrix reconstruction ability does not necessarily translate directly to better task performance. Task performance is influenced by multiple factors, including hyperparameter settings and random seeds. This phenomenon is common in the literature, where FF sometimes underperforms PEFT methods on certain tasks.\\n\\nIn addition, as discussed in `Section 5.5`, our theorem focuses on the _expected_ performance. While specific task structures may favor LoRA or FourierFT in certain instances, these exceptions do not invalidate our theoretical framework since LoCA does outperform FourierFT in terms of average task performance on GLUE and image classification.
\\n\\nMoreover, we argue that the practical value of our work extends beyond performance gains. Our theoretical analysis provides crucial insights into the relationship between low-rank and frequency-domain methods, establishing a foundation for future improvements in frequency-domain PEFT approaches.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your detailed responses and the updates provided in the revised manuscript. I mostly agree with your explanations, particularly regarding the use of reconstruction error as a proxy for downstream task performance. Given the complexities of capturing this relationship\\u2014arising from factors such as model intricacy, task variations, and optimization dynamics\\u2014I find the response reasonable. However, as downstream task performance is ultimately the most critical metric and reconstruction error remains a proxy, further exploration of this relationship could be a valuable direction for future research.\"}", "{\"title\": \"Response to Reviewer STG6, Part 5\", \"comment\": \"### Optimization Strategy and Implementation Details (Question 6)\\n\\nThanks for the detailed questions. We would like to address each concern as follows.\\n\\nRegarding the order of optimization (coefficients first vs. locations first), we want to emphasize that this is primarily an implementation choice rather than a theoretical necessity. The key insight is that we need a well-defined initialization point for the optimization process. Starting with coefficient optimization allows us to leverage the initial random location assignments to establish a baseline approximation, but alternative orderings are theoretically viable. 
This is analogous to how the order of parameter updates in coordinate descent methods can be flexible while maintaining convergence properties.\\n\\nAs for the concern about capturing interactions between coefficients and locations, while simultaneous optimization might seem intuitively appealing, it presents significant challenges for convergence guarantees. Joint optimization of location and coefficient variables often leads to unstable training dynamics and potential convergence issues. Our alternating strategy is inspired by coordinate descent methods, which have well-established convergence properties and have been successfully applied in various optimization scenarios with mixed variable types.\\n\\nRegarding $\\\\mathcal{B}_a$ and $\\\\mathcal{B}_l$, as stated in our implementation details, these are not task-specific parameters but rather empirically determined values for all tasks ($\\\\mathcal{B}_a$ = 10 and $\\\\mathcal{B}_l$ = 20).\\n\\nFinally, we would like to clarify that V5 in our ablation study, which uses backward difference approximation for gradient estimation, is indeed consistent with our description of the alternative policy. As we mentioned in the paper, both forward and backward difference approximations show effectiveness, though their theoretical comparison presents challenges. Our choice of central difference approximation as the default implementation represents a balanced approach, as it potentially provides more stable gradient estimates, though all three variants (forward, backward, and central) are valid implementations within our framework.\"}", "{\"comment\": \"Thanks for the efforts. I will keep my positive score.\"}", "{\"title\": \"Response to Reviewer STG6, Part 3\", \"comment\": \"### How LoCA ensures stable convergence given the dynamic selection of specific magnitude (Weakness 5 and Question 7, part a).\\nThanks for the insightful concern. 
Regarding the potential shift of top-ranked frequencies during training, LoCA addresses this challenge through three key mechanisms:\\n\\nFirst, our finite-difference approximation method (Eq. 5) provides reliable gradient estimates for location updates, ensuring that frequency component selection is guided by actual contribution to loss reduction. Second, the alternating optimization schedule ($\\\\mathcal{B}_a$ steps for coefficients, followed by $\\\\mathcal{B}_l$ steps for locations) allows the model to stabilize coefficient updates before adjusting locations, preventing drastic shifts in frequency selection. Third, the learning rate for location parameters is intentionally set to be significantly smaller than that for coefficients, meaning that frequency component locations only shift when there is strong and consistent evidence from a large number of training samples. This conservative update strategy prevents arbitrary or noise-induced location changes. Furthermore, an important property of frequency-domain representation is that adjacent frequency components represent similar plane waves in both direction and magnitude. Therefore, even when location updates occur, the resulting changes to the weight matrix are smooth and continuous rather than abrupt, as nearby frequencies in the DCT spectrum contribute similar patterns to the final weight update. This intrinsic smoothness property of frequency-domain representation, combined with our conservative location update strategy, ensures that the model maintains stable and consistent updates throughout the training process.\\n\\nOur empirical analysis (as shown in Figure 2) shows smooth improvement during training, without the oscillations that would be expected if frequency components were shifting unstably. 
Our ablation studies (Table 5) demonstrate that this controlled update strategy works well.\\n\\n### Risk of focusing solely on high-magnitude components (Weakness 5 and Question 7, part b).\\nThanks for the comment. Our method is fundamentally based on optimal matrix approximation theory. When operating under limited parameter budgets, the selection of high-magnitude frequency components can provide the mathematically optimal approximation of the weight update matrix in terms of the Frobenius norm. It is worth noting that our use of high-magnitude components and high singular values in the theoretical analysis serves only to investigate the optimal reconstruction ability of frequency-domain and low-rank methods, rather than as a practical component selection strategy. LoCA does not explicitly favor high-magnitude components. Instead, like other PEFT methods, LoCA employs gradient-based optimization to identify the most informative components for each specific task, with a higher upper bound on reconstruction ability.\\n\\nFurthermore, our extensive experimental results across diverse tasks (including NLU, NLG, IFT and CV) demonstrate that LoCA does not practically impair task performance. In fact, LoCA consistently achieves comparable or superior performance to existing methods while using fewer parameters, suggesting that our selection strategy effectively captures task-relevant information.\\n\\nWe acknowledge that optimal matrix reconstruction and task performance are not perfectly equivalent. However, our empirical results strongly validate that our theoretically-motivated approach provides a robust and effective strategy for practical applications.\\n\\n### Computational and memory costs of finite-difference approximation (Weakness 6 & Question 8).\\nThanks for the comment. As claimed in `Section 4.3`, our finite-difference approximation for location gradients is computationally efficient. 
The key insight is that the gradient computations for locations and coefficients share the same intermediate results (specifically, the DCT of $\\\\partial L/\\\\partial \\\\Delta W$), ensuring that their computational complexities are of the same order. We have illustrated in `Appendix I` that the computational complexity of location gradient estimation is asymptotically equivalent to that of coefficient gradient computation. Therefore, during the alternating optimization process, the computational burden is stable.\\n\\nRegarding memory consumption, the storage required for location variables is negligible compared to the base model parameters. In fact, our empirical analysis in `Appendix J` demonstrates that LoCA achieves comparable training speed and lower memory usage than FourierFT. This efficiency advantage stems from LoCA's real-valued computations, whereas FourierFT requires complex arithmetic operations that introduce unnecessary computational overhead when converting to real-valued parameter matrices. These practical benchmarks validate our theoretical analysis and confirm that the proposed method maintains computational and memory feasibility.\"}", "{\"title\": \"Interactive Discussions\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your efforts in reviewing this paper. We highly encourage you to participate in interactive discussions with the authors before November 26, fostering a more dynamic exchange of ideas rather than a one-sided rebuttal.\\n\\nPlease feel free to share your thoughts and engage with the authors at your earliest convenience.\\n\\nThank you for your collaboration.\\n\\nBest regards,\\nICLR 2025 Area Chair\"}", "{\"summary\": \"The paper introduces Location-aware Cosine Adaptation (LoCA), a novel frequency-domain parameter-efficient fine-tuning method for pre-trained LLMs. 
By leveraging the inverse Discrete Cosine Transform (iDCT) and selectively learning components in the frequency domain, LoCA addresses the constraints of the naive low-rank adaptation (LoRA) method.\\nIn short, LoCA enhances expressiveness while maintaining computational efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. As emphasized by the authors, their iDFT-based variants have managed to outperform the expressivity of previous low-rank-based methods.\\n\\n2. Overall, the presentation is clear, supported by rigorous mathematical derivations and extensive experimental results.\", \"weaknesses\": \"1. Some baseline experimental results differ significantly from those in related papers, which may indicate carelessness in the experimental process. Also, more ablation experiments are needed to increase confidence.\\n\\n2. For most datasets, LoCA doesn't show a clear advantage over FourierFT in terms of reducing parameter budget and improving accuracy.\\n\\nPlease see the questions section for more details.\", \"questions\": \"1. Why are the accuracy rates of the baseline methods on the Stanford Cars and FGVC datasets more than 5% higher than those reported in related papers? I mainly compared the experimental results from the FourierFT paper (https://arxiv.org/pdf/2405.03003) and yours, and found that the differences are small on other datasets, but the results on the Stanford Cars and FGVC datasets are significantly beyond normal error margins. I am unsure whether this is due to errors caused by carelessness in the experimental process, or if you used different ViT models compared to theirs. Specifically, the experimental results on the Stanford Cars and FGVC datasets are emphasized in your work, and it is crucial to ensure the precision of these results.\\n\\n2. Why are there so few ablation experiments for FourierFT fine-tuning on ViT? 
Since FourierFT is the most competitive counterpart, additional experimental results for FourierFT 239K and FourierFT 480K after fine-tuning on ViT could be included. After all, LoCA presents results for two different parameter budgets, while FourierFT only provides results for the smallest parameter budget for comparison, which does not meet the fairness requirements of an ablation study.\\n\\n3. What are the differences between LoCA and other methods in terms of Memory Cost and Training Time? You may use charts to illustrate these differences explicitly.\\n\\n4. Why does LoCA not show a significant advantage over FourierFT on fine-tuning various foundation models such as RoBERTa, LLaMA, and ViT, in terms of reducing parameter budget and improving accuracy? Does this suggest that, while your work is strongly interpretable, it may have limited practical value?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer STG6, Part 2\", \"comment\": \"### Reasonableness of assumptions and impact of non-i.i.d. updates (Weakness 4 & Question 5).\\nThanks for the comment. We would like to address this concern from both theoretical and empirical perspectives:\\n\\n**Theoretical Justification**:\\nRegarding the condition of i.i.d. updates on the weight matrix:\\n\\nThe identical distribution property can be justified by considering the inherent symmetry in parameter matrices - all elements are functionally equivalent in their roles, supporting the assumption of identical distribution. Regarding independence, we feel it is important to elaborate on this key point. While strict independence between parameter updates may not hold due to the nature of gradient-based optimization, our theoretical framework remains valid under substantially weaker conditions. Specifically:\\n\\n* The classical CLT requires i.i.d. 
conditions primarily for mathematical convenience and clarity of proof. However, as shown in [R1] (page 27, Theorems C and E), the asymptotic normality result holds under much weaker dependency conditions. This is particularly relevant to our setting, where parameter updates may exhibit weak correlations through backpropagation.\\n\\n* For sequences with weak dependence, such as our parameter updates, the key theoretical results still hold under $l$-mixing conditions [R2]. These mixing conditions essentially require that parameters sufficiently far apart in the network have diminishing correlations - a property that naturally emerges in deep neural networks due to their layered structure and the localized nature of gradient updates.\\n\\nAlso, i.i.d. is a sufficient, but not necessary, condition for the WLLN and the CLT.\\n\\n**How might potential deviations from i.i.d. behavior impact expressivity comparisons?** Based on the above justification, while we presented our proof under i.i.d. assumptions for clarity and accessibility, extending it to the more general case of weak dependence is primarily a technical exercise that would add considerable complexity to the presentation without fundamentally changing the conclusions. The core insights and theoretical guarantees remain valid, albeit with more complex mathematical machinery required for the proof.\\n\\n**Empirical Validation**:\\nWe have conducted extensive empirical analyses to validate the asymptotic normality of weight updates. Our hypothesis testing results (Figure 1b) demonstrate consistently high p-values across different layers, providing strong statistical evidence for the normality assumption. The visualization in Figure 1a shows clear alignment between the empirical distribution of weight updates and the fitted Gaussian distribution. 
The ESD analysis (Figure 1c) further supports our assumptions about the distribution of weight updates.\\n\\nTo further address the concern about sensitivity to non-normality, please refer to the comprehensive empirical validation presented in `Section 2`, particularly the statistical tests that quantify potential deviations from normality in terms of total variation. These results demonstrate that our approach remains effective even under real-world conditions where perfect normality may not hold.\\n\\n[R1] Serfling, Approximation Theorems of Mathematical Statistics, John Wiley & Sons, 2009.\\n\\n[R2] Withers, Central limit theorems for dependent variables, Probability Theory and Related Fields, 1981.\"}" ] }
4MWUdp6deL
Learning Code Preference via Synthetic Evolution
[ "Jiawei Liu", "Thanh V Nguyen", "Mingyue Shang", "Hantian Ding", "Xiaopeng Li", "Yu Yu", "Varun Kumar", "Zijian Wang" ]
Large Language Models (LLMs) have recently demonstrated remarkable coding capabilities. However, assessing code generation based on well-formed properties and aligning it with developer preferences remains challenging. In this paper, we explore two key questions under the new challenge of code preference learning: (i) How do we train models to predict meaningful preferences for code? and (ii) How do human and LLM preferences align with verifiable code properties and developer code tastes? To this end, we propose CodeFavor, a framework for training pairwise code preference models from synthetic evolution data, including code commits and code critiques. To evaluate code preferences, we introduce CodePrefBench, a benchmark comprising 1364 rigorously curated code preference tasks to cover three verifiable properties—correctness, efficiency, and security—along with human preference. Our evaluation shows that CodeFavor holistically improves the accuracy of model-based code preferences by up to $28.8$%. Meanwhile, CodeFavor models can match the performance of models with $6\sim 9\times$ more parameters while being $34\times$ more cost-effective. We also rigorously validate the design choices in CodeFavor via a comprehensive set of controlled experiments. Furthermore, we discover the prohibitive costs and limitations of human-based code preference: despite spending 23.4 person-minutes on each task, $15.1\sim 40.3$% of tasks remain unsolved. Compared to model-based preference, human preference tends to be more accurate under the objective of code correctness, while being sub-optimal for non-functional objectives.
[ "Code Generation", "Large Language Model", "Preference Learning", "Evaluation" ]
Reject
https://openreview.net/pdf?id=4MWUdp6deL
https://openreview.net/forum?id=4MWUdp6deL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w6rOIolhO2", "vuRi4Zi0bg", "vjxGWWOK8g", "vPYG7WmsrK", "t4JFDAKO2O", "rPcUL1cljc", "qledszXVe8", "qdQQtQUnjk", "mDhDdFXmqO", "m0ki10ZbDJ", "kw7VKQeqpx", "kpyfanlD1k", "kPdZTOXILO", "iEJZhBOoJU", "esYmeTXPsX", "dFpQehxr9s", "ZJQWyLWHSq", "WrAZDDbBgr", "V5qtpUu5up", "ShFo0suV6H", "PswUmCHtHl", "NwokOChHzQ", "ELehnrwjEH", "C7byK0hF3Z", "AEdXNJHe0U", "A4xbZMvlXM", "13lKtsG8IA", "0tFflc8Ahl" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731840243153, 1732251383565, 1732788286272, 1731845550315, 1730125361587, 1737523968341, 1733175339112, 1732170478271, 1730710055250, 1731839918878, 1734976375714, 1732310059117, 1732753633597, 1732753651253, 1731866108604, 1732309964232, 1732304506450, 1731840496401, 1731840063999, 1730874703580, 1732201854660, 1732753659332, 1732309989116, 1732784451553, 1732275570145, 1731111061638, 1732494500081, 1731840432061 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_cqfq" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_cqfq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_cqfq" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_zRJn" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9214/Area_Chair_WTn3" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_cc8R" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_zRJn" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_cqfq" ], [ "ICLR.cc/2025/Conference/Submission9214/Reviewer_4Aq4" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ], [ "ICLR.cc/2025/Conference/Submission9214/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> How do the experiments demonstrate that CodeFavor\\u2019s performance gains are due to the framework itself rather than simply distilling knowledge from a stronger LLM (Llama-3-70B-Instruct)?\\n\\nGreat question! Looking at Table 3 \\u2013 the original Llama3-70B-Instruct\\u2019s overall score is 76.1, and our smaller models trained on data partially derived from Llama3-70B-Instruct can achieve a score of 77.7, which is even better than the model they were derived from, despite their much smaller size. In theory, if we solely distilled Llama3-70B-Instruct, a model at a much smaller parameter scale would be unlikely to achieve a competitive score.\\n\\nWhy is CodeFavor better than distillation? 
Please note that instead of solely distilling stronger models, our generated data is also derived from real-world code commits, which bring additional information and weak labeling.\\n\\n> Could you provide more details on the inference process for the evaluation results in Table 3? Specifically, how many samples were created for each problem, what temperature was used, and are the results statistically significant?\\n\\nThanks for raising the question! We use greedy decoding (mentioned in Section 3.1) for LLM generation and the prompt is listed in Listing 1 (mentioned in Appendix A.3). We implemented a post-processing method (Appendix A.3) to extract LLMs\\u2019 preferences from their generations. As we use greedy decoding, the result is theoretically deterministic. \\n\\n> Could you elaborate on any aspects that emphasize the novelty of your work compared to previous studies? \\n\\nThanks for raising the concern. We argue that our paper is novel from various perspectives:\\n* **Study-wise**, we invested significant effort to study human annotation for code at scale and demonstrated various new insights.\\n* **Benchmark-wise**, to our knowledge, CodePrefBench is the first benchmark to evaluate code preferences from models and non-model approaches, covering 4 detailed dimensions. We also provided a line of insights by conducting comprehensive evaluations.\\n* **Technique-wise**, to our knowledge, we are the first work to construct synthetic code preference data, based on which we train competitive models to efficiently predict fine-grained code preference. \\n\\n> \\u2026creating datasets from git commits [1,6] and evolving from sampled code [2,3] are common practices in the field.\\n\\nWhile we greatly appreciate the reviewer's question and references, we\\u2019d like to point out that our synthetic data generation has completely different purposes and implementations compared to the mentioned prior work. 
\\n\\n* We focus on **code preference training data**, which is a pair of code snippets where one is better than the other, alongside detailed criteria for comparison. \\n* The mentioned work [2,3] synthesizes coding prompts and solutions, targeting the different applications of **code generation**. Meanwhile, the mentioned work [1,6] focuses on creating evaluation tasks rather than generating training data.\\n* Meanwhile, we use the data (e.g., commits) differently: SWE-Bench [1] uses the pull request (which is also a commit) **partially** by **directly** using added test cases as an oracle. Our technique Commit-Instruct **fully** leverages the whole commit, including the pre- and post-commit code and the commit message, by rephrasing the noisy commits into a format that focuses on the actual change.\\n* Lastly, the mentioned work [6] does not use or mention git commits at all in its paper.\\n\\n> \\u2026synthetic data generation may not fully ensure code correctness\\u2026. \\n\\nWe agree that synthetic data can be low-quality if not well-created! We argue that our technique is empirically robust. This can be exemplified by Figures 3, 4, and 5 in the paper, where the preferred code snippets improve the quality of earlier code.\\n\\nAlso, please kindly note that our technique is not \\u201cpure distillation\\u201d and thus does not solely rely on the LLM\\u2019s capability. For example, in Commit-Instruct, the synthesized code pair is inherited from code commits \\u2014 empirically, the post-commit code improves the pre-commit code; otherwise, they would have been filtered out.\\n\\n> Llama3-70B-Instruct is relatively weak compared to state-of-the-art models\\u2026\\n\\nThanks for pointing this out!\\n\\nPlease kindly note that our method can be applied to any model, and in this paper, our choice of the critic model stems from licensing reasons (Llama3 is license-friendly). 
Even though it's not the SOTA model, results still show significant improvements which proves that our method does not have to rely on proprietary models.\"}", "{\"comment\": \"Thanks for the reply! We have updated the revised manuscript! Please let us know if you have further suggestions.\"}", "{\"comment\": \"We thank the reviewer for the reply.\\n\\nWe respectfully disagree with the characterization of our work as involving \\\"minor\\\" adjustments to prior work, as our contributions required significant experimental design, innovation, and human resources. Below, we address specific points raised:\\n\\n## > *\\\"the difference between code generation and code preference using commits seems relatively **minor**\\\"*\", \"we_respectfully_request_clarification\": \"can the reviewer point to any prior work on code generation that utilizes code commits for synthetic data generation in the manner we propose?\", \"we_would_like_to_reiterate_and_expand_upon_points_from_our_earlier_reply_regarding_related_work\": \"* Wizardcoder & Magicoder & Crosscodeeval: **NONE** of these approaches utilize code commits in any form.\\n* Swe-bench: This work does **NOT** involve synthetic code generation; it uses GitHub issues (not commits) to construct evaluation tasks, which serve entirely different purposes and are unrelated to code instruction tuning.\\n\\nGiven these distinctions, we argue that using code commits for synthetic data generation is a novel contribution. Furthermore, our application of this methodology to a new domain reinforces its originality.\\n\\n\\n## > *\\\"as do the corresponding improvements in model performance.\\\"*\\n\\n## > *\\\"either by showcasing stronger performance improvements\\\"*\\n\\nWe would like to highlight that our approach improved model performance by up to **28%** across **ALL** evaluated models, ranging from 7B to 27B parameters. 
These results were achieved using fully permissive datasets and teacher models, without relying on proprietary models like GPT-4.\\n\\nTherefore, we don't think 28% is a \\\"minor\\\" number; for example, the most cited paper, ResNet [1], improved prior work by 28% on the COCO dataset.\\n\\n[1] He et al. \\\"Deep Residual Learning for Image Recognition\\\"\\n\\n\\n## > *\\\"developing a novel synthetic data generation method that addresses key challenges, such as ensuring code functionality or preference correctness, could significantly enhance the contribution.\\\"*\\n\\nWhile we appreciate the reviewer's thoughts on new directions in this area, we believe it is important to assess our work based on what we have contributed, rather than what we have not done --- every paper has an infinite amount of \\\"undone\\\", but each still solves something.\", \"please_allow_us_to_remind_the_reviewer_of_our_notable_contributions\": \"1. **Dimension & Benchmark:** We formulated the problem of \\\"code preference learning\\\" and built a benchmark of over a thousand evaluation tasks.\\n2. **Expensive human study:** We employed 18 developers to provide a human study on code preference. We showed a number of interesting findings on when human preference for code can be reliable.\\n3. **Technique:** For this new problem, we provide an effective model training recipe that can improve a given model's code preference accuracy by up to 28%. The technique suggests creating synthetic code preference pairs from weakly supervised data, including code commits and strong LLM critiques over weak LLM generations.\\n\\nIf this still cannot address the reviewer's concern, we kindly request specific examples that show what we have done is not enough or overlaps with prior publications. 
Thanks!\"}", "{\"comment\": \"Thank you for your response; it has addressed many of my concerns.\\n\\nHowever, one of the baselines you compared is the Logprob **Sum**, which has proven highly ineffective in existing studies (see Figure 7 in https://arxiv.org/pdf/2107.03374) and performs even worse than random. A more reasonable comparison would be using the Logprob **Mean**, as the log prob sum tends to assign different scores to sequences with different lengths. This is also reflected in your reported results: Logprob Sum only achieves around 30% in correctness, significantly lower than random selection. \\n\\nIn addition, what are the inputs for each baseline? Did they use the same prompt as your method, or only the code itself?\"}", "{\"summary\": \"This paper aims to enable LLMs to better assess the quality of two code snippets by constructing a synthetic pairwise code preference dataset. The dataset is built using code commits (with pre- and post-commit versions as contrastive samples) and code critiques (the code snippet improved by a superior LLM as a contrastive sample). The authors have built benchmarks in Correctness, Efficiency, Security, and Human Preference to test the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The synthetic dataset construction method (commits and critiques) is technically sound and novel to me.\", \"The authors conducted a comprehensive evaluation of the method. In addition to correctness, which is the focus of many traditional code generation studies, the authors also assess efficiency, security, and human developer preference.\", \"The authors put significant effort into the formatting of images and tables, which enhances the readability of the paper.\"], \"weaknesses\": \"1. My main concern is that the authors overlook many works on code ranking and do not provide any experimental comparison. 
Many statements, such as \\\"learning code preferences has been *largely under-explored*\\\", \\\"the *first* open recipe to train pairwise code preference models\\\", and \\\"understudied code generation domain\\\", appear to overclaim. To name a few:\\n\\n- A basic, training-free baseline is to compare the mean log probability of two code snippets and select the one with the highest probability, as in [1]. Furthermore, [2] also uses model likelihood for code selection. \\n- Some research also explores training classifiers to choose the better code, as in [3].\\n- The authors did not compare their work with the dataset in [4] mentioned in the Related Work section.\\n- In addition, but less importantly, to better contextualize the paper, some words about recent advances in execution-based code selection [5,6,7,8] would be appreciated. Particularly, [8] also employs a generation task similar to this paper. Considering that the work is recent, this comparison is not necessarily required.\\n\\nSince the authors only reported the performance of the backbone LLMs and lacked empirical comparisons with advanced code selection methods, it is difficult to determine the relative improvement level of this work within related studies.\\n\\n2. Some training details in the paper require further clarification. For instance, does the classifier task operate on the next token in Equation (1)? If so, considering that the label tokens (\\\"A\\\" or \\\"B\\\") and the criterion $c$ are tightly connected without a delimiter or explicit prompt, how does the LLM recognize where the criterion ends to output label tokens?\\n\\n3. Since the authors collected both the training set and testing benchmarks, it's unclear whether they took decontamination steps to prevent test set leakage. 
If no decontamination was performed, analyzing the potential overlap between the training and test sets would be beneficial.\\n\\n**Minor comments**\\n\\n- The caption for Listing 1 is missing a period at the end.\\n- It would be better to place Equation (2) right after line 141.\\n\\n**References**\\n\\n[1] Evaluating Large Language Models Trained on Code, https://arxiv.org/abs/2107.03374\\n\\n[2] Coder Reviewer Reranking for Code Generation, ICML 2023.\\n\\n[3] Fault-Aware Neural Code Rankers, NeurIPS 2022.\\n\\n[4] CodeUltraFeedback: An LLM-as-a-Judge Dataset for Aligning Large Language Models to Coding Preferences.\\n\\n[5] CodeT: Code Generation with Generated Tests, ICLR 2023.\\n\\n[6] Natural Language to Code Translation with Execution, EMNLP 2022.\\n\\n[7] B4: Towards Optimal Assessment of Plausible Code Solutions with Plausible Tests, ASE 2024.\\n\\n[8] Sifting through the Chaff: On Utilizing Execution Feedback for Ranking the Generated Code Candidates, ASE 2024.\", \"questions\": \"1. The Security score of human developers in Table 3 is only 59.7. Does this indicate that humans are not proficient at judging code security, even similar to random selection?\\n\\n2. Could you further explain \\u201cScores within 1 percentage point of the highest\\u201d in Table 3, as well as the detailed measurement method for \\\"uncertain responses\\\"?\\n\\n3. The authors discovered that code comments may negatively affect model preferences, which is a bit strange and may be harmful to real-world applications. Is it possible to result from the class imbalance in comments (e.g., a higher proportion of comments in positive examples)? 
Could you provide the number of comments in positive and negative examples in the training and testing sets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewers 4Aq4, cc8R, and zRJn:\\n\\nWe'd like to gently remind you that December 2nd (end of day in AoE) is the last day for reviewer feedback and we can reply to additional questions until tomorrow. Please kindly take a look at our replies and updated manuscript with extensive additional experiments and reference updates, and let us know if your concerns have been addressed.\\n\\nThank you for your time!\"}", "{\"comment\": \"Thanks for the additional baselines and details. When an appropriate revised version of the manuscript is submitted, I tend to raise my score.\"}", "{\"summary\": \"The paper proposes CODEFAVOR, a framework for training pairwise code preference models using synthetic evolution data generated from code commits and LLM critiques. This approach addresses the challenge of aligning code generation with developer preferences, focusing on correctness, efficiency, and security through a benchmark called CODEPREFBENCH, which includes 1,364 preference tasks. CODEFAVOR models achieve comparable performance to much larger models while being more cost-effective, and experiments reveal that human preferences often fall short in non-functional objectives like efficiency and security. 
The study provides insights into balancing model and human preferences, highlighting the potential limitations and strengths of each approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper contributes two synthetic code preference datasets and CODEPREFBENCH, a collection of 1,364 carefully curated preference tasks, to evaluate code preferences labeled by various approaches.\", \"This paper comprehensively quantifies and conducts case studies on code preferences derived from human developers and LLMs.\", \"CODEFAVOR models can match the preference accuracy of models that are larger by 6\\u223c9\\u00d7, while being cheaper by 34\\u00d7\"], \"weaknesses\": \"- The approach to synthetic data generation lacks originality, as creating datasets from git commits [1,6] and evolving from sampled code [2,3] are common practices in the field.\\n- The pairwise modeling approach is also not particularly novel; using pairwise prompts, criterion-based prompting, and classification or generation labels [4,5,7] have been previously explored in other studies.\\n- Additionally, there is concern that synthetic data generation may not fully ensure code correctness, as it heavily depends on the LLM used for critique and generation. The chosen model, Llama3-70B-Instruct, is relatively weak compared to state-of-the-art models and limited to only this single model.\\n- Finally, it is challenging to determine whether the performance gains following CODEFAVOR training are due to the distillation of knowledge from stronger LLMs used in data generation or from the CODEFAVOR training itself.\\n\\n\\n1. Jimenez, Carlos E., et al. \\\"Swe-bench: Can language models resolve real-world github issues?.\\\" arXiv preprint arXiv:2310.06770 (2023).\\n2. Luo, Ziyang, et al. \\\"Wizardcoder: Empowering code large language models with evol-instruct.\\\" arXiv preprint arXiv:2306.08568 (2023).\\n3. Wei, Yuxiang, et al. 
\\\"Magicoder: Empowering code generation with oss-instruct.\\\" Forty-first International Conference on Machine Learning. 2024.\\n4. Dong, Yi, et al. \\\"Steerlm: Attribute conditioned sft as an (user-steerable) alternative to rlhf.\\\" arXiv preprint arXiv:2310.05344 (2023).\\n5. Wang, Zhilin, et al. \\\"Helpsteer: Multi-attribute helpfulness dataset for steerlm.\\\" arXiv preprint arXiv:2311.09528 (2023).\\n6. Ding, Yangruibo, et al. \\\"Crosscodeeval: A diverse and multilingual benchmark for cross-file code completion.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n7. Qin, Zhen, et al. \\\"Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting.\\\" Findings of the Association for Computational Linguistics: NAACL 2024. 2024.\", \"questions\": [\"Do you have concerns that the synthetic data generation methods, Commit-Instruct and Critic-Evol, may not fully ensure code correctness? If not, this raises another question: the quality of synthetic data is highly dependent on the LLM used to generate it. How do the experiments demonstrate that CODEFAVOR\\u2019s performance gains are due to the framework itself rather than simply distilling knowledge from a stronger LLM (Llama-3-70B-Instruct)?\", \"Could you provide more details on the inference process for the evaluation results in Table 3? Specifically, how many samples were created for each problem, what temperature was used, and are the results statistically significant?\", \"Could you elaborate on any aspects that emphasize the novelty of your work compared to previous studies?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> what are the backgrounds of the developers participating in the evaluation and data annotation, and how their potential biases may have affected the soundness of the approach and the evaluation?\\n\\nThanks for the question! 
Section 3.2 provides a detailed description of the annotators\\u2019 backgrounds: \\u201cOur annotation team consists of 18 software developers, two-thirds of which hold degrees in computer science, and 95% of them have over two years of programming experience. For Python proficiency, 43% of them self-rate as advanced, while the rest consider themselves middle-level.\\u201d\\n\\nSpecifically, we mitigate potential individual biases with our best efforts:\\n\\n1. We engage with as many as 18 developers\\n2. We have each task annotated by three different developers through majority voting\\n3. We comprehensively evaluate over a thousand tasks\\n\\nNotably, to ensure the quality of human evaluation, we iteratively refined the annotation guidelines, carefully communicated with the annotators, and performed post-annotation inspection.\\n\\n> \\u2026the larger LLMs have clear edges over the CodeFavor-improved small models... Why would they not be a better option than using CodeFavor to improve the smaller models?\\n\\nUsers should always use CodeFavor models in place of the raw model as CodeFavor can improve the preference accuracy of both small and large models. \\n\\n* Our existing experiments have shown that it achieves up to 28% improvement for 8-12B models. \\n* Additionally, with our best computing budget, we apply CodeFavor to Gemma2-27B-IT (data mixture), where an improvement is also observed by as much as 9%:\\n\\n\\n| | Correctness | Efficiency | Security | Avg. 
|\\n| :---- | :---- | :---- | :---- | :---- |\\n| Gemma2-27B-IT (Baseline) | 55.4 | 78.4 | 80.8 | 71.5 |\\n| Gemma2-27B-IT \\\\+ CodeFavor (Classification) | 65.6 | 73.0 | 96.1 | 78.2 |\\n\\nWe initially focused on small models as they are more efficient to train and deploy.\\n\\n> The paper mentions that the reliability of using large language models (LLMs) as evaluators often hinges on their reasoning capabilities\\u2026 A more thorough examination of potential biases and their impact on the findings would strengthen the paper's arguments.\\n\\nThoroughly studying the LLM evaluators\\u2019 bias is definitely important! \\n\\nThat is exactly why in Appendix (A.4) we put nearly 10 pages of case studies and summaries to understand the preference and bias patterns of evaluated models. For example, Gemini 1.5 Pro seems reluctant to sharpen its preference for security-related code snippets. We hope our analysis can provide insights and references for future research.\\n\\n> \\u2026 the prohibitive costs and limitations of human-based code preference assessments \\u2026 suggests that human evaluators may struggle with certain tasks... The paper could benefit from a more in-depth exploration of these limitations and their implications for the overall findings.\\n\\nThanks for the suggestion! At a high level, our paper explores human-based code preference in various dimensions (Section 3.2), including expertise, confidence, overhead, etc. Additionally, at a lower level, in Appendix A.4, we provide 10 case studies that compare model and human behaviors in detail.\", \"overall_we_found\": \"1. Human preference for code is expensive and slow\\n2. Human preference strongly aligns with **code correctness** but can struggle with **non-functional properties**\\n3. 
The 2nd conclusion can come from the fact that generalist developers are familiar with writing test cases to validate code but they might not have enough background in code optimization and code security, suggesting that domain experts or LLM assistance are generally needed when assessing non-functional code properties\\n\\n> in what scenarios would these two versions be available?\\n\\nGreat question! Accurate code preference can be applied in various scenarios including (i) quality assurance of code commits, where we detect if post-commit code is better than pre-commit one; (ii) inference-time sample ranking; (iii) training reward models (i.e., our experimental setting); and (iv) provide data and signal for preference optimization algorithms such as DPO.\\n\\n> While synthetic data can be useful, it may not fully capture the complexities and nuances of real-world coding scenarios.\\n\\nThanks for the comment! Maintaining the real-world complexities in synthetic data is definitely important! That is exactly why we specifically leveraged weak supervision in data generation rather than performing pure distillation:\\n\\n* In Commit-Instruct, the synthesized data comes from rephrasing diverse, **real-world** commits collected and cleaned carefully from GitHub to make sure the training data reflects real-world code usage.\\n* Beyond human-created commits, we also want to cover code that is directly generated by LLMs, given that nowadays lots of GitHub code can be partially generated by LLMs. As such, Critic-Evol constructs code preference pairs by capturing and fixing code defects generated by smaller models.\"}", "{\"metareview\": \"This paper is on the problem space of code generation using LLMs with a focus on aligning the code with developer preferences. The paper develops an approach for training pairwise code preference models using synthetic data and introduces a benchmark that considers correctness, efficiency, and security along with human preferences. 
Experimental evaluation shows improvement in the accuracy of the code preference model.\\n\\nThe reviewers appreciated the importance of this research problem and the contributions on both the benchmark and the preference model. They also asked a number of questions ranging from motivation to problem formulation and experimental methodology. Some of the questions were answered by the author rebuttal. There are two important outstanding concerns that were raised by Reviewer 4Aq4 which were not addressed.\\n1. Motivation is not strong and/or unclear.\\n2. The realistic use-case of this study from a software developers' point of view.\\n\\nTherefore, I'm recommending rejecting this paper and strongly encourage the authors to improve the paper based on the reviewers' feedback for resubmission.\", \"additional_comments_on_reviewer_discussion\": \"Summarized in the meta review.\"}", "{\"title\": \"Gentle reminder \\ud83e\\udd17\", \"comment\": \"Dear reviewer,\\n\\nThanks for your helpful questions and comments! We look forward to hearing your feedback and please do not hesitate to let us know if you have additional questions or concerns!\\n\\nCheers\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period deadline approaches, we would appreciate your feedback on our response, new results, and revised manuscripts.\\n\\nYour feedback would greatly contribute to our work and the ICLR community!\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period deadline approaches, we would appreciate your feedback on our response, new results, and revised manuscripts.\\n\\nYour feedback would greatly contribute to our work and the ICLR community!\"}", "{\"comment\": \"Thanks for the prompt response!\\n\\n> A more reasonable comparison would be using the Logprob Mean\\n\\nThanks for the note! 
This is very helpful and we added the Logprob Mean results:\\n\\n| | Correctness | Efficiency | Security | Avg |\\n| :---- | :---- | :---- | :---- | :---- |\\n| Logprob **Mean** (Llama 3.1 8B) | 32.4 | 38.4 | 59.9 | 43.6 |\\n| Logprob Sum (Llama 3.1 8B) | 31.7 | 60.8 | 55.6 | 49.3 |\\n| (Best 8B Neural Scorer) Skywork-Reward-Llama-3.1-8B-v0.2 | 56.2 | 64.2 | 61.4 | 60.6 |\\n| CodeFavor Llama-3-8B Classifier (ours, as a reference) | 58.0 | 73.0 | 95.2 | 75.4 |\\n\\nOverall, we see that the Logprob Mean result is similarly random and even leads to a worse overall result compared to Logprob Sum (probably just due to randomness). Note the results are computed by averaging the logprob of all tokens (including prompt + response). If we mask prompts and only consider response logprobs, we get the following results, which are slightly better yet still somewhat random.\\n\\n| | Correctness | Efficiency | Security | Avg |\\n| :---- | :---- | :---- | :---- | :---- |\\n| Logprob **Mean** (Llama 3.1 8B) | 32.6 | 47.7 | 59.4 | 46.6 |\\n\\n> what are the inputs for each baseline?\\n\\nThanks for the question!\\n\\n* Logprob: we compute the score for each (prompt + response) applied with the corresponding model chat template. 
As such, we have a logprob-based score for each response (they share the same prompt) and we choose the response with the highest score.\\n* Skywork-Reward-Llama-3.1-8B-v0.2: Similarly to Logprob, we use (prompt + response) applied with the corresponding model chat template as the model's input -- getting two scores, selecting the best response.\\n\\nFor a more detailed reference, below is the code to construct prompts for a given response pair:\\n\\n```python\\n prompts = [\\n self.tokenizer.apply_chat_template(\\n [\\n {\\\"role\\\": \\\"user\\\", \\\"content\\\": prompt},\\n {\\\"role\\\": \\\"assistant\\\", \\\"content\\\": res},\\n ],\\n tokenize=False,\\n ).replace(self.tokenizer.bos_token, \\\"\\\")\\n for res in [resa, resb]\\n ]\\n```\"}
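To make the Sum-vs-Mean contrast discussed above concrete, here is a minimal, self-contained sketch of the two scoring rules (the per-token log probabilities below are made up for illustration; this is not the actual evaluation code):

```python
def rank_by_logprob(logprobs_a, logprobs_b, use_mean=False):
    """Pick the response whose per-token log probabilities score higher.

    Summing penalizes longer sequences, since every extra token adds a
    negative term; taking the mean normalizes the score by length.
    """
    score = (lambda lp: sum(lp) / len(lp)) if use_mean else sum
    return "A" if score(logprobs_a) >= score(logprobs_b) else "B"

# Hypothetical pair: B is longer but more confident per token.
lp_a = [-0.5, -0.5]   # sum = -1.0, mean = -0.5
lp_b = [-0.2] * 6     # sum = -1.2, mean = -0.2

print(rank_by_logprob(lp_a, lp_b))                 # → A (sum favors the shorter snippet)
print(rank_by_logprob(lp_a, lp_b, use_mean=True))  # → B (mean favors higher per-token confidence)
```

This illustrates why the two scores can disagree on response pairs of different lengths, which is the length-bias concern raised for the Logprob Sum baseline.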
Our revision will definitely discuss them and compare ours with those that are applicable.\", \"added_new_baseline_results\": \"| | Correctness | Efficiency | Security | Avg |\\n| :---- | :---- | :---- | :---- | :---- |\\n| Logprob Sum (Mistral Nemo 12B) | 28.6 | 52.0 | 61.4 | 47.3 |\\n| Logprob Sum (Llama 3.1 8B) | 31.7 | 60.8 | 55.6 | 49.3 |\\n| (Best 8B Neural Scorer) Skywork-Reward-Llama-3.1-8B-v0.2 | 56.2 | 64.2 | 61.4 | 60.6 |\\n| CodeFavor Llama-3-8B Classifier (ours, as a reference) | 58.0 | 73.0 | 95.2 | 75.4 |\\n\\n* Decoding logprob/likelihood [1,2]: Following Codex, we compute the logprob sum and select the response with a higher logprob sum value as the preferred snippet. As we can see in the first two rows, the logprob-based approaches overall perform randomly.\\n* Regression-based neural rankers: We note that the \\u201cFault-Aware Neural Code Rankers\\u201d work [3] did not release any model checkpoints. However, there have been various great reward models using similar modeling as Code Rankers (e.g., outputting a score). Therefore, we use \\u201cSkywork-Reward-Llama-3.1-8B-v0.2\\u201d, the **best** scoring-based reward model at its size from the Reward Bench Leaderboard (https://huggingface.co/spaces/allenai/reward-bench) as the baseline. 
The table above shows that our method is better than the best general 8B reward model by 24% in the code domain, even though the compared reward model is trained on a rich set of data generated by proprietary models such as GPT-4 and Claude-3-Opus.\\n\\nWhile we have cited or will cite the following works, they are either already implicitly compared or not applicable for comparison:\\n\\n* CodeUltraFeedback [4] prompts various LLMs to rate code snippets and construct preference pairs \\u2013 in our evaluation, we also directly prompt LLMs as baselines to rank code pairs, which are already intrinsically compared with [4].\\n* As is earlier motivated in the paper, we focus on studying general scenarios, where test execution [5,6,7,8] is not always available (e.g., many code snippets cannot be executed/tested). Therefore, we only evaluate \\u201cstatic-analysis\\u201d based approaches including human baselines and models.\\n\\n\\n\\n> Could you further explain \\u201cScores within 1 percentage point of the highest\\u201d in Table 3, as well as the detailed measurement method for \\\"uncertain responses\\\"?\\n\\nThanks for pointing this out and we will make it more clear in our revision!\\n\\n* \\u201cScores within 1 percentage point of the highest\\u201d: given all results in a list (say `scores`), we highlight the best results by making scores whose value is greater than `max(scores) - 1` bold. This highlighting helps readers quickly identify not just the single best result, but also other approaches that achieved nearly equivalent performance.\\n* \\\"Uncertain responses\\\": A human response is uncertain if all three annotations are \\u201cTie\\u201d (e.g., the \\u201cDeveloper Agreement\\u201d block in Figure 12) or conflicting conclusions exist (some choose \\u201cA\\u201d while others choose \\u201cB\\u201d). 
A model response is uncertain when it does not conclude that one response is better than the other regarding the mentioned criteria, for example, the \\u201cGemini 1.5 Pro\\u201d block in Figure 13.\\n\\n\\n> The Security score of human developers in Table 3 is only 59.7. Does this indicate that humans are not proficient at judging code security, even similar to random selection?\\n\\nThanks for the suggestion! The result answers our research question of \\u201cHow good are generalist developers at differentiating secure and insecure code?\\u201d We conclude that they can be uncertain about code security most of the time (i.e., oftentimes \\u201cTie\\u201d; yet this at least means they know when they don\\u2019t know).\\n\\n* Annotator background (Section 3.2): we aim to study general developers\\u2019 code preference accuracy and thus engage with 18 software engineers, most of whom have more than two years of programming experience.\\n* Implication: Per the clarified background, the results indicate that generalist human developers might struggle with identifying code security issues in the code pairs we provided (See examples in A.4.3). Yet, 59.7 is still approximately 20% better than random guessing (50).\\n\\n> Some training details in the paper require further clarification.\\n\\nThanks for the question! Indeed we should have provided more context.\\n\\nWe followed the SliC-HF paper by Zhao et al. (2023) and started our training based on instruction-tuned models. As such, given a user query $x$, the prompt the model actually receives is templated according to the model's chat template, whose special tokens separate the user query from the response.\\n\\nWe will include the additional explanation in our revision. 
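A minimal sketch of that next-token classification setup (pure Python with hypothetical vocabulary ids; the actual implementation operates on the LLM's logits at the first position after the templated prompt):

```python
import math

def pairwise_label_loss(logits, label_ids, target):
    """Cross-entropy restricted to the two label tokens ("A" / "B").

    logits: per-vocabulary scores at the position right after the
            templated prompt (the chat template's special tokens mark
            where the criterion ends and the label begins);
    label_ids: vocabulary ids of the "A" and "B" tokens (hypothetical);
    target: 0 if snippet A is preferred, 1 if snippet B is preferred.
    """
    a, b = (logits[i] for i in label_ids)
    log_z = math.log(math.exp(a) + math.exp(b))  # log-partition over the pair
    return log_z - (a, b)[target]                # -log softmax(target)

# Toy vocabulary of 10 tokens; the model puts more mass on id 3 ("A").
logits = [0.0] * 10
logits[3] = 2.0
loss_a = pairwise_label_loss(logits, (3, 7), target=0)  # small loss
loss_b = pairwise_label_loss(logits, (3, 7), target=1)  # larger loss
```

The point of the sketch is that restricting the softmax to the two label ids makes the objective a binary classifier, so no explicit delimiter between criterion and label is needed beyond the template's special tokens.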
Thank you!\"}", "{\"comment\": \"> To validate the effectiveness of the developed training framework, might it be helpful to add some baseline training approaches which also train the LLMs using the same training data used by CODEFAVOR?\\n\\nThanks for the question! Please kindly note that our framework is a combination of (i) training data generation; and (ii) model training scheme \\u2013 the effectiveness is demonstrated through the co-design of both components.\\n\\nOur understanding of the reviewer\\u2019s question is to study the (ii) component, i.e., the model training schemes (Please kindly let us know if we misunderstood).\\n\\n* In Table 5 from the paper, we thoroughly studied various training schemes including classification, generation, and model merging.\\n* To further improve the study thoroughness, we follow the RLHF literature and additionally trained a Bradley\\u2013Terry model (Dong et al, 2024) for comparison: Surprisingly, while the overall performance of Bradley-Terry modeling is suboptimal to the classification modeling (4% weaker), its preference accuracy on code correctness beats all evaluated LLM approaches including Llama-3.1-405B-Instruct.\\n\\n| | Correctness | Efficiency | Security | Avg |\\n| :---- | :---- | :---- | :---- | :---- |\\n| CodeFavor Llama-3-8B Classification | 58.0 | 73.0 | 95.2 | 75.4 |\\n| CodeFavor Llama-3-8B Bradley-Terry | 75.0 | 59.7 | 82.6 | 72.4 |\\n\\nDong et al. RLHF Workflow: From Reward Modeling to Online RLHF. TMLR, 2024.\\n\\n> In Table 1\\u2026 whether CODEFAVOR can further improve these Open-Weight Models with larger sizes?\\n\\nGreat suggestion! With our best computing budget (8xA100@80G), we additionally trained a CodeFavor classifier based on Gemma2-27B-IT with data mixture. \\n\\n| | Correctness | Efficiency | Security | Avg. 
|\\n| :---- | :---- | :---- | :---- | :---- |\\n| Gemma2-27B (Baseline) | 55.4 | 78.4 | 80.8 | 71.5 |\\n| CodeFavor Gemma2-27B (Classification) | 65.6 | 73.0 | 96.1 | 78.2 |\\n| CodeFavor Gemma2-9B (Classification) | 56.8 | 75.3 | 92.3 | 74.8 |\", \"we_show_that\": \"* Our method can further improve the larger 27B open-weight models by 9.4%.\\n* Meanwhile, compared to the 9B CodeFavor model, the 27B CodeFavor classifier can further improve it by ~5%.\\n\\n> \\u2026 effectiveness \\u2026 when the approach is applied to some other LLMs for code related tasks (e.g., Code Llama, Lemur).\\n\\nThanks for the great suggestion! We extended the new experiments below by training CodeLlama 13B based on CodeFavor and were able to achieve an overall improvement of 16-17%. \\n\\nWe are especially grateful for this suggestion as we found CodeLlama achieved a positive default correctness score without tuning compared to other general models (such as Mistral Nemo 12B) which mostly randomly guess. This might indicate that code instruct models might have a better sense of code correctness in their intrinsic preference.\\n\\n| | Correctness | Efficiency | Security | Avg. |\\n| :---- | :---- | :---- | :---- | :---- |\\n| CodeLlama 13B Instruct (Baseline) | 57.3 | 64.3 | 74.9 | 65.5 |\\n| CodeLlama 13B (CodeFavor Classifier) | 57.7 | 73.3 | 96.6 | 75.9 |\\n| CodeLlama 13B (CodeFavor Generator) | 59.5 | 78.1 | 92.3 | 76.6 |\"}", "{\"summary\": \"The paper proposes CODEFAVOR, a framework for training pairwise code preference models from synthetic evolution data, including code commits and code critiques. To evaluate code preferences, the paper introduces CODEPREFBENCH, a benchmark comprising 1364 rigorously curated code preference tasks to cover three verifiable properties - correctness, efficiency, and security - along with human preference. 
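For readers unfamiliar with the Bradley-Terry modeling referenced above, a generic sketch of the objective (the scalar rewards stand in for the outputs of a trained reward head; this is an illustration, not the actual training code):

```python
import math

def bt_prob_a_wins(reward_a, reward_b):
    """Bradley-Terry: P(A preferred over B) = sigmoid(r_A - r_B)."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

def bt_loss(reward_a, reward_b, a_preferred=True):
    """Negative log-likelihood of an observed preference; a reward model
    is trained by minimizing this over labeled code pairs."""
    p = bt_prob_a_wins(reward_a, reward_b)
    return -math.log(p if a_preferred else 1.0 - p)

# Equal rewards give a 50/50 preference; assigning a higher reward to the
# preferred snippet lowers the loss.
```

Unlike the classification scheme, which reads both snippets jointly, this formulation scores each snippet independently and compares the scalar rewards.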
The evaluation shows that CODEFAVOR holistically improves the accuracy of model-based code preferences.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The paper is well written and easy to follow.\\n\\n(2) The paper introduces a benchmark which can potentially be used by future papers.\\n\\n(3) The developed approach is evaluated using multiple LLMs, showing that the developed approach is generally effective.\\n\\n(4) The developed approach has good intuitions.\", \"weaknesses\": \"(1) In Table 1, it seems that the approaches in the rows are either LLMs, or LLMs with the training framework developed in this paper. To validate the effectiveness of the developed training framework, might it be helpful to add some baseline training approaches which also train the LLMs using the same training data used by CODEFAVOR?\\n\\n(2) In Table 1, considering that there is still a gap between the Open-Weight Models and Our Models and Baselines (i.e., LLMs used with CODEFAVOR), might it be helpful to understand whether CODEFAVOR can further improve these Open-Weight Models with larger sizes?\\n\\n(3) It might be helpful if the paper can show the effectiveness of the developed approach when the approach is applied to some other LLMs for code related tasks (e.g., Code Llama, Lemur).\", \"questions\": \"(1) In Table 1, to validate the effectiveness of the developed training framework, might it be helpful to add some baseline training approaches which also train the LLMs using the same training data used by CODEFAVOR?\\n\\n(2) In Table 1, might it be helpful to understand whether CODEFAVOR can further improve these Open-Weight Models with larger sizes?\\n\\n(3) Would the developed approach also be effective, if the developed approach is applied to some other LLMs for code related tasks (e.g., Code Llama, Lemur)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", 
\"code_of_conduct\": \"Yes\"}", "{\"title\": \"Meta Response & Submission Revision\", \"comment\": [\"We thank the reviewers for the inspiring comments and we have updated our submission by making the following notable changes, highlighted using purple in the updated submission PDF:\", \"**(Experiment) CodeFavor for larger models (Appendix A.6 / Table 9):** We applied CodeFavor to a much larger model, namely Gemma2-27B-IT, showing an overall improvement of 9%. (4Aq4, cc8R)\", \"**(Experiment) Adding Bradley-Terry modeling as a baseline (Appendix A.6 / Table 10):** Using BT modeling, it achieves a suboptimal overall result compared to classification/generation, but it can achieve a substantially better correctness score specifically (cc8R)\", \"**(Experiment) Applying CodeFavor to CodeLlama (Appendix A.6 / Table 11):** CodeFavor improves coding models also quite well; the untuned coding model achieves a better correctness score compared to general models of similar sizes (cc8R)\", \"**(Experiment) Comment distribution in positive & negative samples (Appendix A.6 / Table 12):** The results show that the negative impact of comments does not simply come from distribution imbalance in training/evaluation sets (cqfq)\", \"**(Experiment) Comparing against Logprob Mean & code scoring models (Table 3):** logprob mean leads to rather random results and CodeFavor models largely outperform the leading 8B general reward model (cqfq)\", \"**(Discussion) Acknowledge missed prior work** (zRJn, cqfq)\", \"**(Writing) Clarifying training details (Section 2.1):** prompt and output token are separated by special tokens defined by the chat template (cqfq)\", \"We look forward to hearing further feedbacks from the reviewers!\"]}", "{\"title\": \"Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period deadline approaches, we would appreciate your feedback on our response, new results, and revised manuscripts.\\n\\nYour feedback would greatly contribute to our work and the 
ICLR community!\"}", "{\"title\": \"Gentle reminder \\ud83e\\udd17\", \"comment\": \"Dear reviewer,\\n\\nThanks for your helpful questions and comments! We look forward to hearing your feedback and please do not hesitate to let us know if you have additional questions or concerns!\\n\\nCheers\"}", "{\"comment\": \"I sincerely thank the authors for their efforts in clarifying the contributions and distinctions from prior works. While this is an intriguing new direction, the difference between code generation and code preference using commits seems relatively minor, as do the corresponding improvements in model performance. Furthermore, the proposed synthetic data generation method still appears to be limited to the scope of previously explored synthetic data generation methods.\\n\\nAlthough I will maintain the current score, I hope to see future iterations of this work that better demonstrate the advantages of code preference data over alternative approaches\\u2014either by showcasing stronger performance improvements or by presenting compelling use cases. Additionally, developing a novel synthetic data generation method that addresses key challenges, such as ensuring code functionality or preference correctness, could significantly enhance the contribution.\"}", "{\"comment\": \"Thank you for your revision and I have updated my score to 6. However, I noticed that your paper significantly exceeds the 10-page limit. I am unsure whether this complies with the guidelines.\"}", "{\"summary\": \"The paper addresses the challenge of assessing code generation based on well-formed properties and aligning it with developer preferences, which has proven difficult in the context of Large Language Models (LLMs). To tackle this issue, the authors propose CODEFAVOR, a framework designed to train pairwise code preference models using synthetic evolution data, including code commits and critiques. 
Additionally, they introduce CODEPREFBENCH, a benchmark consisting of 1364 curated code preference tasks that evaluate three key properties: correctness, efficiency, and security, alongside human preferences. The main results indicate that CODEFAVOR significantly enhances the accuracy of model-based code preferences by up to 28.8%, while also demonstrating that these models can perform comparably to those with 6 to 9 times more parameters, all while being 34 times more cost-effective. Furthermore, the study highlights the limitations of human-based code preference assessments, revealing that a substantial percentage of tasks remain unsolved despite considerable time investment.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper presents a significant advancement in the field of code preference learning by introducing the CODEFAVOR framework, which innovatively utilizes synthetic evolution data to train models that predict meaningful code preferences. The novelty lies in its dual focus on aligning human and model preferences with verifiable code properties, addressing a critical gap in existing research. Key contributions include the development of CODEPREFBENCH, a comprehensive benchmark with 1364 curated tasks that evaluate code based on correctness, efficiency, and security, thus providing a robust evaluation framework for future studies. The results demonstrate that CODEFAVOR can enhance model accuracy by up to 28.8% while being more cost-effective than larger models, highlighting its practical significance in improving code generation assessments. Additionally, the paper sheds light on the limitations of human-based assessments, emphasizing the need for model-based approaches in evaluating non-functional code properties, which further underscores the importance of the research findings.\", \"weaknesses\": \"The problem formulation/setting can be improved in terms of clarity, motivation, and realism. 
The framework is proposed to serve code assessment purposes, i.e., judging automatically which version of code generated by a model from a prompt is preferred (i.e. more correct/secure/efficient) between a pair of two versions. The questions are (1) in what scenarios would these two versions be available, and (2) how realistic it is that there are such strong and discriminative contrasts between the two versions (i.e., correct versus wrong, fast versus slow, secure versus vulnerable). In a typical use scenario of LLMs for code generation, developers may feed the LLM with a prompt and get a response. Should they always ask the model for two responses? If so, the cost would double. More importantly, it is probably unlikely that there is a such contrast between the two versions of code generated from the same prompt---e.g., the two versions could be similarly good or bad. Learning preferences in a strongly differentiable pair of two responses does not seem to be realistic. If so, the paper may want to provide supporting evidence that this is the case. Or the problem itself is not motivated convincingly.\\n\\nAnother primary weakness is the heavy reliance on synthetic evolution data for training the CODEFAVOR framework. While synthetic data can be useful, it may not fully capture the complexities and nuances of real-world coding scenarios. This limitation raises concerns about the generalizability of the model's performance in practical applications, as the evaluation may not reflect actual developer preferences or code behavior in diverse environments.\\n\\nThe paper acknowledges the prohibitive costs and limitations of human-based code preference assessments, noting that despite significant time investment, a substantial percentage of tasks remain unsolved (15.1% to 40.3%) . This suggests that human evaluators may struggle with certain tasks, which could undermine the reliability of the human preference data used for comparison. 
The paper could benefit from a more in-depth exploration of these limitations and their implications for the overall findings.\\n\\nThe paper mentions that the reliability of using large language models (LLMs) as evaluators often hinges on their reasoning capabilities, which can be subject to inherent biases . This raises questions about the objectivity of the model-based preferences derived from LLMs, as biases could skew the results and affect the alignment with human preferences. A more thorough examination of potential biases and their impact on the findings would strengthen the paper's arguments.\\n\\nThe evaluation framework, CODEPREFBENCH, focuses on three specific properties: correctness, efficiency, and security. While these are important aspects, the paper does not justify the choices, among various other quality aspects of code, such as maintainability or readability. Also, it seems that each of these chosen properties is separately considered, yet in real-world scenarios developers need to balance multiple factors at the same time when choosing which code to adopt (e.g., code that is both secure and correct). The interplay among these potentially competing factors is not considered in the approach nor in the evaluation.\", \"questions\": \"Q1: what are the backgrounds of the developers participating in the evaluation and data annotation, and how their potential biases may have affected the soundness of the approach and the evaluation?\", \"q2\": \"the larger LLMs have clear edges over the CodFavor improved small models with much lower costs than human evaluators. 
Why would they not be a better option than using CodeFavor to improve the smaller models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle reminder of reviewer responses\", \"comment\": \"Dear Reviewers,\\n\\nAs the discussion period deadline approaches, we would appreciate your feedback on our response. Your feedback would greatly contribute to our work and the ICLR community.\\n\\nThank you for your time and consideration!\"}", "{\"comment\": \"> it's unclear whether they took decontamination steps to prevent test set leakage.\\n\\nThanks for the question! We have carefully quantified the test set contamination in Appendix (A.5):\\n\\n* Figure 16 shows that **only 0.1% to 1.7%** of positive samples in the test-set code pairs can find training-set positive samples with a similarity score above 80. \\n* As a reference, Riddell et al. (2024) show that 50.8% and 63.4% of code samples in the Stack (Li et al., 2023), can reach over 80 similarity scores with ground-truth code samples in MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021) respectively.\\n\\n> Could you provide the number of comments in positive and negative examples in the training and testing sets?\\n\\nThanks for the question! Please kindly note that our empirical observation was that \\u201cLLM-generated\\u201d comments (not including human comments) may negatively impact LLM\\u2019s preference decision due to self-bias. We also run experiments to count the average number of comments between positive and negative samples:\", \"training_set\": \"The following table lists the comment distribution in the two training sets. It shows that our observation is right \\u2013 even if positive samples overall have a bit more comments, they do not seem to help CodeFavor models achieve better preference accuracy.\\n\\n| Training sets | Avg. Comments in Positive Code Samples | Avg. 
Comments in Negative Code Samples |\\n| :---- | :---- | :---- |\\n| Commit-Instruct-EditPack | 0.26 | 0.21 |\\n| Critic-Evol-SOSS | 0.01 | 0.01 |\", \"test_set\": \"The following table shows the comment distribution of the **raw data** in our test-set \\u2013 please note that our evaluation setup removes all comments when evaluating a model; therefore, imbalances in the raw data (if any) won\\u2019t impact the evaluation results presented in the paper.\\n\\n| | Avg. Comments in Positive Samples | Avg. Comments in Negative Samples |\\n| :---- | :---- | :---- |\\n| Human Preference | 5.29 | 4.88 |\\n| Code Correctness | 0.09 | 1.10 |\\n| Code Efficiency | 0.0028 | 0.0028 |\\n| Code Security | 0.93 | 0.94 |\\n\\nFor the human preference, efficiency, and security categories, the average amount of comments is balanced. For the correctness category, comments amounts are unbalanced \\u2013 after a closer investigation we found this is because (i) our code samples are generated from both base and instruction-tuned models; (ii) base models perform completion which preserves the docstring from the task description; (iii) instruction-tuned models tend not to repeat the docstring in the task description; (iv) instruction-tuned models are generally stronger than base models, leading to the imbalance.\", \"our_implementation_to_count_the_number_of_comments_is_listed_below\": \"```python\\nimport re\\n\\ndef count_comments(code_snippet):\\n single_line_comment_pattern = r\\\"#.*\\\"\\n multi_line_comment_pattern = r'\\\"\\\"\\\"(.*?)\\\"\\\"\\\"|\\\\'\\\\'\\\\'(.*?)\\\\'\\\\'\\\\''\\n\\n # Find all single-line comments\\n single_line_comments = re.findall(single_line_comment_pattern, code_snippet)\\n # Find all multi-line comments\\n multi_line_comments = re.findall(multi_line_comment_pattern, code_snippet)\\n # Flatten multi-line comments and remove empty strings\\n multi_line_comments = [\\n comment for group in multi_line_comments for comment in group if comment\\n ]\\n return 
len(single_line_comments) + len(multi_line_comments)\\n```\\n\\n*(Author response to be continued in the next reply)*\"}" ] }
4M0BRyGMnJ
Democratic Training Against Universal Adversarial Perturbations
[ "Bing Sun", "Jun Sun", "Wei Zhao" ]
Despite their advances and success, real-world deep neural networks are known to be vulnerable to adversarial attacks. Universal adversarial perturbation, an input-agnostic attack, poses a serious threat to their deployment in security-sensitive systems. In this case, a single universal adversarial perturbation deceives the model on a range of clean inputs without requiring input-specific optimization, which makes it particularly threatening. In this work, we observe that universal adversarial perturbations usually lead to an abnormal entropy spectrum in hidden layers, which suggests that the prediction is dominated by a small number of ``features'' in such cases (rather than democratically by many features). Inspired by this, we propose an efficient yet effective defense method for mitigating UAPs called \emph{Democratic Training} by performing entropy-based model enhancement to suppress the effect of the universal adversarial perturbations in a given model. \emph{Democratic Training} is evaluated with 7 neural networks trained on 5 benchmark datasets and 5 types of state-of-the-art universal adversarial attack methods. The results show that it effectively reduces the attack success rate, improves model robustness and preserves the model accuracy on clean samples.
[ "Neural network adversarial attack; Universal adversarial perturbation; Adversarial attack defense" ]
Accept (Poster)
https://openreview.net/pdf?id=4M0BRyGMnJ
https://openreview.net/forum?id=4M0BRyGMnJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vY0cPLzcca", "pbq4Y0Xpo0", "oSlA5yMy7G", "mpUfYmLF39", "jSwxv1kvlX", "hyiuz8WF2i", "heGo8vLI2u", "gmnnTH5L2l", "fEwjVdy6Yr", "deBbvzOp5x", "cOFOAUg8Pf", "cGdCIHNAxu", "ZNIFSA3atK", "XIazKA2WZC", "WWJR8C7Nld", "VuTW2Lc96n", "VgV5z1INoV", "TZgnimLy7w", "S3s6dKh0SY", "PRbnLkYBZH", "MGShKuMZgQ", "LzGF64XBXA", "LIq6WtUFH9", "H808TvFjr8", "5NVRLKSgFI", "1b6K1Hjl52", "0XnkIKTyhf", "0H4g0u9Bso" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732687769326, 1732347986085, 1732293785757, 1737523486665, 1732290967838, 1732291513963, 1732292100731, 1732524262816, 1732509190824, 1732347872020, 1732290998653, 1732293476555, 1732647905995, 1732291550241, 1730438865166, 1732361329427, 1732478960095, 1732293290357, 1734520912934, 1730060142481, 1732685868336, 1732294341540, 1732513899111, 1729252116954, 1730530280197, 1732512232539, 1732361826092, 1732499418844 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_xSaF" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_xSaF" ], [ 
"ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_1Bzp" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_1Bzp" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_hAa5" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Area_Chair_67M6" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_hAa5" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_etKh" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_xSaF" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_hAa5" ], [ "ICLR.cc/2025/Conference/Submission2118/Authors" ], [ "ICLR.cc/2025/Conference/Submission2118/Reviewer_etKh" ] ], "structured_content_str": [ "{\"title\": \"Change Summary on Revised Paper\", \"comment\": \"Dear All,\\n\\nWe deeply appreciate your thorough evaluation and insightful feedback. We have updated our paper accordingly. Below is a summary of the changes:\\n\\n1. Update Abstract: Democratic training is evaluated on one more model, one more benchmark dataset and one more UAP attack method\\n2. Add Section 2.2 Evaluation Metrics: add definitions of SR and AAcc.\\n3. Update Section 2.4 Threat Model: correct descriptions on adversarial capabilities and knowledge\\n4. Update Section 3.2 Entropy Analysis: improve descriptions on empirical study\\n5. Update Section 3.3 Entropy-based Repair: improve descriptions with symbols used in Algorithm 1, Algorithm 2 and Equation 6 properly introduced\\n6. Update Section 4.1 Experimental Setup: new dataset evaluatedL CIFAR-10; update Table 1\\n7. 
Update RQ1: add experimental results for $NN_7$ trained on CIFAR-10 with wideresnet architecture; update Table 2 and Figure 3\n8. Update RQ2: add experimental results for UAP attack method SGA; update Table 3\n9. Update RQ3: add experimental results for TRADES; update Table 4\n10. Update RQ4: add experimental results for DensePure; update Table 5\n11. Update Section 5 Related Works: improve background on defense against adversarial attacks\n12. Add Future Works section in Appendix 7.1\n13. Update Datasets Used in Our Experiments in Appendix 7.3: add descriptions on CIFAR-10\n14. Update Adaptive Attacks in Appendix 7.5: add experimental results for advanced adversaries; add Table 8\", \"15\": \"Add Entropy Analysis on Other UAPs in Appendix 7.6: add Figure 4\n16. Add Non-targeted UAP Attacks in Appendix 7.7: add results for non-targeted UAP defense performance; add Table 9; add Figure 5\n17. Address comments on formatting\n\n\n\nThanks!\n\nBR.\nAuthors\"}", "{\"title\": \"Response to Reviewer etKh I\", \"comment\": \"We appreciate your thoughtful review and the constructive feedback provided. Below are our responses to the points you've raised.\", \"response_to_weaknesses\": \"1. Although it is right to say that our method is based on min-max, we would respectfully argue that our observation and method are still novel - as it is based on the entropy spectrum rather than adversarial attacks. That is, based on our observation that UAP causes abnormal entropy, Democratic Training mitigates the effect of UAPs in general by finetuning a given model with low entropy samples generated on-the-fly. 
Different from adversarial training which trains a model from scratch, Democratic Training improves the robustness of a given model against UAP attacks more efficiently. Based on our experimental results in RQ3, our low entropy samples are more effective compared with adversarial examples when finetuning a given model against UAP attacks. \nTo further compare Democratic Training and adversarial training, we evaluate TRADES [1], a widely recognized adversarial training method, on UAP defense and the results are shown below:\n\n| | before || after (ours) || after (TRADES) ||\n| ------ | ----- | ----- | ----- | ----- | ----- | ----- |\n| target | Aacc. | SR | Aacc. | SR | Aacc. | SR |\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 | 0.819 | 0.018 |\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 | 0.816 | 0.004 |\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 | 0.816 | 0.028 |\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 | 0.819 | 0.042 |\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 | 0.818 | 0.015 |\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 | 0.818 | 0.042 |\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 | 0.811 | 0.036 |\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 | 0.812 | 0.010 |\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 | 0.815 | 0.015 |\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 | 0.817 | 0.010 |\n| avg | 0.152 | 0.936 | 0.861 | 0.034 | 0.816 | 0.022 |\n| | Clean Acc: 0.931 || Clean Acc: 0.901 || Clean Acc: 0.827 ||\n\nBased on the above result, both TRADES and Democratic Training are effective in mitigating the effect of UAPs. However, TRADES sacrifices model accuracy by over 10% while Democratic Training keeps the model accuracy high (reduced by 3%). Furthermore, Democratic Training repairs the model within 20 min while it takes more than 15 hrs for TRADES to train a robust model on the same machine. \nHence, adversarial training is effective in keeping models robust against UAP attacks but not time efficient. 
Democratic training finetunes a given model for a few epochs, which is much faster, and is able to reduce the attack success rate effectively while keeping the model accuracy on clean samples at a high level.\n\n2. Yes, $L_{cce}$ represents cross entropy loss and $I_b^{en}$ represents a batch of generated low entropy samples. Thanks for pointing this out and we will improve our presentation in the revised version.\n\n3. Thanks for the suggestion and the effort in trying out our approach. We tested our approach on a wideresnet model trained on CIFAR-10 and the table below shows the result. We are also preparing to open source our code (https://anonymous.4open.science/r/democratic_training-EB5A/).\n\n| | before || after ||\n| ------ | ----- | ----- | ----- | ----- |\n| target | Aacc. | SR | Aacc. | SR |\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 |\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 |\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 |\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 |\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 |\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 |\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 |\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 |\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 |\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 |\n| avg | 0.152 | 0.936 | 0.861 | 0.034 |\n| Clean Acc. | 0.931 || 0.901 ||\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer xSaF\", \"comment\": \"We deeply appreciate your thorough evaluation and insightful feedback. Below, we provide our detailed responses to the points raised.\", \"response_to_weaknesses\": \"1. Thanks for the comments and sorry for the mistake. Indeed, our approach is designed to defend against a strong adversary who can conduct UAP attacks with white-box access to the model. We have modified our threat model in the revised paper accordingly.\n2. Thanks for the insightful comment. 
In RQ3, we aim to compare the efficiency of adversarial examples and our low-entropy examples in mitigating the effect of UAPs when finetuning a given model. We are happy to evaluate \u2018full\u2019 adversarial training on UAPs and compare the result with ours. Given the time constraints, we have now evaluated TRADES [2] on a wideresnet model trained on the cifar-10 dataset and compared the result with ours. The results are summarised below.\n\n| | before || after (ours) || after (TRADES) ||\n| ------ | ----- | ----- | ----- | ----- | ----- | ----- |\n| target | Aacc. | SR | Aacc. | SR | Aacc. | SR |\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 | 0.819 | 0.018 |\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 | 0.816 | 0.004 |\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 | 0.816 | 0.028 |\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 | 0.819 | 0.042 |\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 | 0.818 | 0.015 |\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 | 0.818 | 0.042 |\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 | 0.811 | 0.036 |\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 | 0.812 | 0.010 |\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 | 0.815 | 0.015 |\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 | 0.817 | 0.010 |\n| avg | 0.152 | 0.936 | 0.861 | 0.034 | 0.816 | 0.022 |\n| | Clean Acc.: 0.931 || Clean Acc.: 0.901 || Clean Acc: 0.827 ||\n\nBased on the above result, both TRADES and Democratic Training are effective in mitigating the effect of UAPs. However, TRADES sacrifices model accuracy by over 10% while Democratic Training suffers from much less reduction (3%). Furthermore, Democratic Training repairs the model within 20 min while TRADES takes over 15 hrs to train a robust model on the same machine. Hence, our method is effective in UAP defense and is much more time efficient. \n\n3. Thanks for the recommendation and we agree that advanced adversaries may tailor the adversarial examples trying to bypass democratic training. 
To explore Democratic Training\u2019s resilience to such attackers, we conduct experiments such that when generating the UAP, the attacker further controls the change in layer-wise entropy. Based on DF-UAP, the optimisation loss function is modified as $L(i) = (1 - weight) * L_{cce}(i, target) - weight * H_l(i)$, where $i$ represents a clean training input, target represents the attack target class and $H_l(i)$ represents the layer-wise entropy loss. We use $H_l(i)$ to control the entropy change caused by the UAP and parameter $weight$ is used to control the importance of $H_l(i)$ over the attack success rate. We conduct such advanced attacks on all models with $weight$ set to 0.1~0.9 and all models show similar results. For illustration purposes, results on $NN_1$ are summarised below:\n\n| | before || after ||\n| ----- | ------ | ----- | ----- | ----- |\n| alpha | Aacc | SR | Aacc | SR |\n| 0.0 | 0.118 | 0.764 | 0.619 | 0.001 |\n| 0.1 | 0.121 | 0.775 | 0.619 | 0.001 |\n| 0.2 | 0.118 | 0.759 | 0.619 | 0.001 |\n| 0.3 | 0.128 | 0.764 | 0.613 | 0.001 |\n| 0.4 | 0.127 | 0.761 | 0.612 | 0.001 |\n| 0.5 | 0.125 | 0.759 | 0.608 | 0.000 |\n| 0.6 | 0.141 | 0.745 | 0.618 | 0.000 |\n| 0.7 | 0.161 | 0.693 | 0.599 | 0.001 |\n| 0.8 | 0.185 | 0.657 | 0.609 | 0.001 |\n| 0.9 | 0.207 | 0.568 | 0.628 | 0.000 |\n\nBased on the above results, increasing en_weight will cause the attack performance to drop, i.e., the attack success rate starts to drop when $weight$ > 0.5 and the attack SR is below 60% when en_weight is set to 0.9. Our defense stays effective across different en_weight settings where the attack SR is reduced to <1% for all scenarios. We observe similar results on other models as well. 
Hence, knowing how Democratic Training enhances the model and controls the change in layer-wise entropy during the attack process, the adversary is still not able to bypass our defense effectively.\"}", "{\"title\": \"Response to Reviewer 1Bzp\", \"comment\": \"We greatly appreciate your comprehensive review and thoughtful suggestions. Please find our detailed responses to your comments below.\", \"response_to_weekness\": \"1. Thanks for the comments and we have evaluated Democratic Training on adaptive attacks. The results are summarized below and are also available in Appendix 7.4 ADAPTIVE ATTACKS. In summary, although secondary UAP attacks on Democratic Training repaired models can still generate UAPs that successfully fool the models, our defense keeps the secondary attack success rate at a very low level while keeping the adversarial accuracy high.\n\n| Model | AAcc | SR |\n| ----- | ----- | ----- |\n| NN1 | 0.418 | 0.210 |\n| NN2 | 0.303 | 0.249 |\n| NN3 | 0.437 | 0.107 |\n| NN4 | 0.881 | 0.037 |\n| NN5 | 0.445 | 0.307 |\n| NN6 | 0.545 | 0.405 |\n\nFurthermore, we consider more advanced adversaries who may tailor the adversarial examples trying to bypass democratic training. To explore Democratic Training\u2019s resilience to such attackers, we conduct experiments such that when generating the UAP, the attacker further controls the change in layer-wise entropy. Based on DF-UAP, the optimisation loss function is modified as $L(i) = (1 - weight) * L_{cce}(i, target) - weight * H_l(i)$, where $i$ represents a clean training input, target represents the attack target class and $H_l(i)$ represents the layer-wise entropy loss. We use $H_l(i)$ to control the entropy change caused by the UAP and parameter $weight$ is used to control the importance of $H_l(i)$ over the attack success rate. We conduct such advanced attacks on all models with $weight$ set to 0.1~0.9 and all models show similar results. 
For illustration purposes, results on $NN_1$ are summarised below:\n\n| | before || after ||\n| ----- | ------ | ----- | ----- | ----- |\n| alpha | Aacc | SR | Aacc | SR |\n| 0.0 | 0.118 | 0.764 | 0.619 | 0.001 |\n| 0.1 | 0.121 | 0.775 | 0.619 | 0.001 |\n| 0.2 | 0.118 | 0.759 | 0.619 | 0.001 |\n| 0.3 | 0.128 | 0.764 | 0.613 | 0.001 |\n| 0.4 | 0.127 | 0.761 | 0.612 | 0.001 |\n| 0.5 | 0.125 | 0.759 | 0.608 | 0.000 |\n| 0.6 | 0.141 | 0.745 | 0.618 | 0.000 |\n| 0.7 | 0.161 | 0.693 | 0.599 | 0.001 |\n| 0.8 | 0.185 | 0.657 | 0.609 | 0.001 |\n| 0.9 | 0.207 | 0.568 | 0.628 | 0.000 |\n\nBased on the above results, increasing en_weight will cause the attack performance to drop, i.e., the attack success rate starts to drop when $weight$ > 0.5 and the attack SR is below 60% when en_weight is set to 0.9. Our defense stays effective across different en_weight settings where the attack SR is reduced to <1% for all scenarios. We observe similar results on other models as well. Hence, knowing how Democratic Training enhances the model and controls the change in layer-wise entropy during the attack process, the adversary is still not able to bypass our defense effectively.\n\n2. Thanks for the recommendation and we are glad to extend our work to more types of networks in our future work. Sorry to say that due to the time constraint, we haven\u2019t been able to finish experiments on transformers yet. \n\n3. Thanks for the comments and we agree that it might not always be the case that a clean dataset is available. In our problem definition, our aim is to protect a model trained by a third party. In this scenario, a small set of clean data is usually available for testing and validation purposes. We believe such an assumption is reasonable in practice and is usually made in existing works on defense against neural network attacks [5,6,7,8,9,10,11]. 
Having said that, we are glad to explore data-free defense methods against UAPs in our future works (for instance, using a synthetic clean dataset).\"}", "{\"title\": \"Response to Reviewer hAa5 I\", \"comment\": \"Thank you for your thorough review and valuable insights. We have provided detailed responses to your comments below.\", \"response_to_weaknesses\": \"1. Thanks for the comments and we are glad to evaluate more recent UAP attacks. For the time being, we evaluated Democratic Training against SGA [4] published in 2023 and the results are shown below:\n\n| | before || after ||\n| ------- | ----- | ----- | ----- | ----- |\n| model | Aacc | SR | Aacc | SR |\n| NN1 | 0.133 | 0.722 | 0.592 | 0.005 |\n| NN2 | 0.067 | 0.806 | 0.415 | 0.096 |\n| NN3 | 0.147 | 0.641 | 0.510 | 0.011 |\n| NN4 | 0.034 | 0.999 | 0.904 | 0.009 |\n| NN5 | 0.107 | 0.798 | 0.743 | 0.020 |\n| NN6 | 0.227 | 0.776 | 0.812 | 0.034 |\n| average | 0.119 | 0.790 | 0.663 | 0.029 |\n\nThe results show that Democratic Training is able to reduce the attack success rate from 81% to 2.9% and model accuracy is maintained high (>71%), which is consistent with the results reported in the draft. We will add these new results.\"}", "{\"title\": \"Response to reviewer etKh\", \"comment\": \"Thank you for your valuable feedback and for improving your score based on our results and responses. We understand your concerns regarding the reproducibility of our experiments and to address this, we have updated our anonymous source code repo (https://anonymous.4open.science/r/democratic_training-EB5A). We hope this additional resource will clarify any remaining questions and facilitate reproducibility.\n\nWe appreciate your thoughtful review and the opportunity to strengthen the transparency and robustness of our work. 
If you encounter any issues with the code or require further clarification, we would be happy to assist.\"}", "{\"title\": \"Response to questions about non-targeted attack\", \"comment\": \"Thanks for the comment and in this work we report fooling rate (FR) as the attack success rate (SR) following existing works [12, 13]. For a non-targeted attack, fooling rate is used to measure the ratio of samples changing their prediction when the UAP is added. Given a test dataset $X$, a target model $f$ and a UAP $\\delta$,\", \"for_non_targeted_attack\": \"$SR = nFR = \\sum_{x\\in X}\\frac{|f(x + \\delta) \\neq f(x)|}{|X|}$\n\nFor a targeted attack (let $t$ represent the target class, $X_t$ represent the samples with label $t$ in $X$), fooling ratio is used to measure the ratio of samples (not belonging to the target class) classified into the attack target class when the UAP is added:\n\n$SR = tFR = \\sum_{x\\in (X - X_t)}\\frac{|f(x + \\delta) = t|}{|X| - |X_t|}$\n\nHence, it is possible that $AAcc + SR \\neq 1$. Sorry about not defining the metric clearly and we will add the definition in the revised paper accordingly.\", \"reference\": \"[12] Zhang, Chaoning, et al. \\\"Understanding adversarial examples from the mutual influence of images and perturbations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020\n\n[13] Weng, Juanjuan, et al. \\\"Comparative evaluation of recent universal adversarial perturbations in image classification.\\\" Computers & Security 136 (2024): 103576.\"}", "{\"title\": \"Concerns Addressed\", \"comment\": \"Dear Authors,\n\nThank you for addressing the issues I mentioned earlier. It looks like you've covered most of them, which is great. If you add these changes to your final paper, I'll be happy to improve my rating. But I still have a few points to bring up:\n\n1. **Comparison with TRADES:**\n 1) Overall, the results look good. 
\n 2) I noticed a small thing \u2013 did you mean to write \\\"advacc\\\" as \\\"SR\\\"? Could you please check this?\n 3) The time it takes for TRADES and your method to work isn't a big deal since speed isn't usually a concern with white-box defense. Still, it's nice that you mentioned it. If you decide to include this in your paper, it would be helpful to tell us about the computer or device you used.\n 4) Have you thought about combining your method with adversarial training like TRADES?\n\n\n2. **Adaptive Attack:** Your current test for the adaptive attack doesn't show the entropy values of your examples, which is important to know if the attack is truly adaptive. The results you've given do give us some idea of how well the attack works, though.\"}", "{\"title\": \"Response to Reviewer xSaF II\", \"comment\": \"4. Thanks for the comment. We are glad to evaluate Democratic Training on smaller-scale datasets. Below is the performance on a wideresnet model trained on cifar-10. It can be observed that the results are consistent with what we reported. We will add the details in the draft.\n\n| | before || after ||\n| ---- | ---- | ---- | ---- | ---- | \n| target | Aacc. | SR | Aacc. | SR |\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 |\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 |\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 |\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 |\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 |\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 |\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 |\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 |\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 |\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 |\n| avg | 0.152 | 0.936 | 0.861 | 0.034 |\n| | Clean Acc.: 0.931 | Clean Acc.: 0.901 |\n\n5. Thanks for the comments, we will improve our background section in the revised version. The three existing methods we are comparing with are designed for UAP defenses. We are glad to extend the comparison against more general adversarial defenses. 
Given the time constraint, we have now managed to compare with two additional defense methods. That is, we evaluated TRADES against UAPs and compared the result with Democratic Training. Furthermore, we evaluated the performance of DensePure [3] on the wideresnet trained on cifar-10 and the performance comparison is shown below:\n\n| | Before || Ours || DensePure ||\n| ------ | ----- | ----- | ----- | ----- | ----- | ----- |\n| target | Aacc. | SR | Aacc. | SR | Aacc. | SR |\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 | 0.820 | 0.010 |\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 | 0.790 | 0.000 |\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 | 0.790 | 0.010 |\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 | 0.810 | 0.040 |\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 | 0.790 | 0.010 |\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 | 0.810 | 0.010 |\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 | 0.800 | 0.010 |\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 | 0.790 | 0.000 |\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 | 0.820 | 0.010 |\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 | 0.800 | 0.000 |\n| avg | 0.152 | 0.936 | 0.861 | 0.034 | 0.802 | 0.010 |\n| | Clean Acc.: 0.931 || Clean Acc.: 0.901 || Clean Acc: 0.810 ||\n\nDensePure is effective in reducing the UAP attack success rate, but model accuracy is reduced by 12%. Democratic Training reduces the attack success rate to 3.4% and model accuracy is maintained high (3% reduction). Furthermore, as DensePure is a technique based on input sample purification, a large overhead is incurred for each inference. Based on our experiment, DensePure introduces about 560s overhead per inference (with the recommended setting).\n\n6. We will fix these issues in the revised version accordingly.\n\n\nReferences\n\n[1] Costa, J. C., Roxo, T., Proen\u00e7a, H., & In\u00e1cio, P. R. (2024). How deep learning sees the world: A survey on adversarial attacks & defenses. IEEE Access.\n\n[2] Zhang, Hongyang, et al. 
\\\"Theoretically principled trade-off between robustness and accuracy.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[3] Chen, Zhongzhu, et al. \\\"DensePure: Understanding Diffusion Models towards Adversarial Robustness.\\\" Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022. 2022.\"}", "{\"title\": \"Response to Reviewer hAa5 III\", \"comment\": \"Response to questions:\\n1. In RQ3, we aim to compare the efficiency of adversarial examples and our low-entropy examples in mitigating the effect of UAPs when finetuning a given model. Moreover, we conduct an additional experiment to evaluate adversarial training from scratch over UAP attacks. We compare the UAP defense performance of TRADES[1] (which is widely recognized adversarial training method) and ours on a wideresent model trained on cifar-10 dataset. The result is summarized below:\\n\\n| | before || after (ours) || after (TRADES) ||\\n| ------ | ----- | ----- | ----- | ----- | ----- | ----- |\\n| target | Aacc. | SR | Aacc. | SR | Aacc. | SR |\\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 | 0.819 | 0.018 |\\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 | 0.816 | 0.004 |\\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 | 0.816 | 0.028 |\\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 | 0.819 | 0.042 |\\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 | 0.818 | 0.015 |\\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 | 0.818 | 0.042 |\\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 | 0.811 | 0.036 |\\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 | 0.812 | 0.010 |\\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 | 0.815 | 0.015 |\\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 | 0.817 | 0.010 |\\n| avg | 0.152 | 0.936 | 0.861 | 0.034 | 0.816 | 0.022 |\\n| | Clean Acc: 0.931 || Clean Acc: 0.901 || Clean Acc: 0.827 ||\\n\\nBased on the above result, both TRADES and Democratic Training are effective in mitigating the effect of UAPs. However, TRADES sacrifices model accuracy for over 10% while Democratic Training suffers from much less reduction (3%). 
Furthermore, Democratic Training repairs the model within 20 min while TRADES takes over 15 hrs to train a robust model on the same machine. Hence, our method is effective in UAP defense and is much more time efficient. \\n\\n2. Thanks for the comments. In this work, we would like to use entropy to characterise how uncertain the model is on the classification of the intermediate features. Higher entropy suggests the features are ambiguous while lower entropy indicates the model is more certain on classifying the features. Based on our empirical study, UAPs will cause the layer-wise entropy to drop, and such lower entropy indicates the model is more certain on its classification at the same layer. Existing work [12] shows that UAPs contain dominant features over the original image and we argue that such dominant features cause the layer-wise entropy to drop, which dominates the model prediction. We will improve our presentation accordingly in the revised version.\\n\\nReferences\\n\\n[1] Costa, J. C., Roxo, T., Proen\\u00e7a, H., & In\\u00e1cio, P. R. (2024). How deep learning sees the world: A survey on adversarial attacks & defenses. IEEE Access.\\n\\n[12] Zhang, Chaoning, et al. \\\"Understanding adversarial examples from the mutual influence of images and perturbations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020\"}", "{\"title\": \"Response after rebuttal\", \"comment\": \"Thanks for the response. It solves my concerns. I'd like to update the score.\"}", "{\"title\": \"Response to Reviewer 1Bzp II\", \"comment\": \"Response to questions:\\n1. We are glad to extend our method to ViT and other neural network architectures in our future works. Sorry to say that due to the time constraint, we haven\\u2019t been able to finish experiments on transformers yet. \\n\\n2. Sorry for omitting the information. The time cost of enhancing all models is listed below (which we will add in the draft). 
\\n\\n| model | time |\\n| ----- | ----- |\\n| NN1 | 8min |\\n| NN2 | 36min |\\n| NN3 | 7min |\\n| NN4 | 18min |\\n| NN5 | 30min |\\n| NN6 | 16min |\\n\\nReferences\\n\\n[5] Shen, Guangyu, et al. \\\"Backdoor scanning for deep neural networks through k-arm optimization.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\n[6] Tang, Di, et al. \\\"Demon in the variant: Statistical analysis of {DNNs} for robust backdoor contamination detection.\\\" 30th USENIX Security Symposium (USENIX Security 21). 2021.\\n\\n[7] Liu, Yingqi, et al. \\\"Complex backdoor detection by symmetric feature differencing.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[8] Wu, Dongxian, and Yisen Wang. \\\"Adversarial neuron pruning purifies backdoored deep models.\\\" Advances in Neural Information Processing Systems 34 (2021): 16913-16925.\\n\\n[9] Ho, Chih-Hui, and Nuno Vasconcelos. \\\"DISCO: Adversarial defense with local implicit functions.\\\" Advances in Neural Information Processing Systems 35 (2022): 23818-23837.\\n\\n[10] Akhtar, Naveed, Jian Liu, and Ajmal Mian. \\\"Defense against universal adversarial perturbations.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\\n\\n[11] Borkar, Tejas, Felix Heide, and Lina Karam. \\\"Defending against universal attacks through selective feature regeneration.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\"}", "{\"summary\": \"The paper presents a novel defense method called Democratic Training to mitigate the impact of Universal Adversarial Perturbations (UAPs) on deep neural networks. The authors observed that UAPs lead to an abnormal entropy spectrum in hidden layers, which shows the model's prediction is dominated by a small subset of features. Democratic Training mitigates this issue by increasing entropy to ensure model predictions rely on a wider range of features. 
The approach was evaluated on multiple models and datasets and was found to be effective in reducing attack success rates while preserving accuracy on clean data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The use of entropy to reveal the dominance of UAPs and the concept of Democratic Training as a defense mechanism is innovative.\\n2. The method was evaluated across various neural network architectures and benchmark datasets, which strengthens the claim of its general applicability.\\n3. Unlike other defense methods, Democratic Training does not require architectural modifications, which makes it easy to integrate into existing systems\", \"weaknesses\": \"1. The evaluation focused primarily on benchmark datasets and common UAP generation methods. It would be beneficial to see how this approach performs on more sophisticated and adaptive attacks, such as adversarial examples generated in dynamic environments.\\n2. The proposed method mainly works well on CNNs. Authors should validate it on more types of networks, such as transformers.\\n3. The method requires access to a small set of clean data for entropy measurement and training, which might not always be practical\", \"questions\": \"1. How about performances of the method on ViT?\\n2. What's the time cost of the method?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to additional comments of reviewer xSaF\", \"comment\": \"Dear reviewer xSaF:\\n\\nThanks for your comments and we will add these changes in our revised paper. If there are any specific areas where additional detail or explanation could further enhance the presentation, we would be happy to address them.\", \"response_to_additional_points\": \"**Comparison with TRADES:**\\n1. 
Thanks for the positive assessment of our results and we appreciate your recognition of its overall quality. \\n2. Sorry about the mistake, and yes previously we report the advacc after defense for our method and TRADES. Below is the result on both Advacc and SR.\\n\\n| | before || after (ours) || after (TRADES) ||\\n| ------ | ----- | ----- | ----- | ----- | ----- | ----- |\\n | target | Aacc. | SR | Aacc. | SR | Aacc. | SR |\\n| 0 | 0.155 | 0.922 | 0.871 | 0.009 | 0.819 | 0.018 |\\n| 1 | 0.144 | 0.927 | 0.864 | 0.016 | 0.816 | 0.004 |\\n| 2 | 0.143 | 0.954 | 0.857 | 0.050 | 0.816 | 0.028 |\\n| 3 | 0.116 | 0.972 | 0.875 | 0.046 | 0.819 | 0.042 |\\n| 4 | 0.148 | 0.938 | 0.874 | 0.027 | 0.818 | 0.015 |\\n| 5 | 0.226 | 0.861 | 0.861 | 0.036 | 0.818 | 0.042 |\\n| 6 | 0.139 | 0.963 | 0.860 | 0.048 | 0.811 | 0.036 |\\n| 7 | 0.143 | 0.942 | 0.849 | 0.045 | 0.812 | 0.010 |\\n| 8 | 0.171 | 0.928 | 0.852 | 0.047 | 0.815 | 0.015 |\\n| 9 | 0.134 | 0.954 | 0.847 | 0.014 | 0.817 | 0.010 |\\n| avg | 0.152 | 0.936 | 0.861 | 0.034 | 0.816 | 0.022 |\\n| | Clean Acc: 0.931 || Clean Acc: 0.901 || Clean Acc: 0.827 ||\\n\\n3. Thanks for the recommendation and the machine we used to run all reported experiments is with 96-Core 1.4GHz CPU and 60GB system memory with an NVIDIA 24GB RTX 4090 GPU.\\n4. Thanks for the recommendation and we are glad to combine Democratic Training with adversarial training like TRADES. 
This may further improve the performance of Democratic Training and would likely help extend Democratic Training to other types of adversarial attacks besides UAPs.\\n\\n**Adaptive Attack**\\n\\nThanks for the comments, and the table below summarizes the mean entropy of samples with and without UAPs generated with different $weight$ settings:\\n| | before ||| after |||\\n| ------ | ----- | ----- | ------- | ----- | ----- | ------- |\\n| weight | Aacc | SR | entropy | Aacc | SR | entropy |\\n| 0.0 | 0.118 | 0.764 | 5.62 | 0.619 | 0.001 | 7.39 |\\n| 0.1 | 0.121 | 0.775 | 6.07 | 0.619 | 0.001 | 7.39 |\\n| 0.2 | 0.118 | 0.759 | 6.29 | 0.619 | 0.001 | 7.40 |\\n| 0.3 | 0.128 | 0.764 | 6.35 | 0.613 | 0.001 | 7.39 |\\n| 0.4 | 0.127 | 0.761 | 6.89 | 0.612 | 0.001 | 7.40 |\\n| 0.5 | 0.125 | 0.759 | 7.07 | 0.608 | 0.000 | 7.39 |\\n| 0.6 | 0.141 | 0.745 | 7.13 | 0.618 | 0.000 | 7.39 |\\n| 0.7 | 0.161 | 0.693 | 7.16 | 0.599 | 0.001 | 7.39 |\\n| 0.8 | 0.185 | 0.657 | 7.33 | 0.609 | 0.001 | 7.40 |\\n| 0.9 | 0.207 | 0.568 | 7.43 | 0.628 | 0.000 | 7.39 |\\n\\n*clean sample entropy is 7.1\\n\\nAs we can see from the above results, as $weight$ increases, the entropy value of samples with UAP also increases and the value exceeds that of clean samples when $weight >= 0.6$. It is also observed that the attack success rate starts to drop when $weight$ is greater than 0.6. With our enhanced model, the entropy value of samples with UAP stays high for different $weight$ settings and the attack success rate is kept below 1%. 
These results show that even when the advanced attacker tries to control the entropy drop when training a UAP, Democratic Training is still able to protect a given model effectively.\"}", "{\"title\": \"Question about non-targeted attack\", \"comment\": \"In the context of non-targeted attacks, why isn\\u2019t the adversarial accuracy complementary to the attack success rate, i.e., why doesn\\u2019t Aacc+SR=1?\"}", "{\"title\": \"Response to Reviewer hAa5 II\", \"comment\": \"2. Thanks for the insightful comments. In this work, we focus on targeted UAP attacks which are both more relevant from an attacker point of view (i.e., so that the attacker can trigger specific target outcomes) and more challenging from a defender point of view. Our empirical study in section 3.2 shows the abnormal entropy spectrum caused by targeted UAPs and Democratic Training is designed accordingly. We have now managed to evaluate Democratic Training on non-targeted attacks as well and the results are summarized below:\\n| | before ||| after |||\\n| ----- | ----- | ----- | ----- | ----- | ----- | ----- |\\n| model | cacc | aacc | SR | cacc | aacc | SR |\\n| NN1 | 0.752 | 0.057 | 0.939 | 0.705 | 0.594 | 0.267 |\\n| NN2 | 0.717 | 0.056 | 0.943 | 0.651 | 0.369 | 0.559 |\\n| NN3 | 0.685 | 0.098 | 0.888 | 0.65 | 0.469 | 0.408 |\\n| NN4 | 0.999 | 0.002 | 0.981 | 0.968 | 0.918 | 0.066 |\\n| NN5 | 0.858 | 0.053 | 0.958 | 0.839 | 0.607 | 0.374 |\\n| NN6 | 0.892 | 0.289 | 0.737 | 0.884 | 0.801 | 0.129 |\\n| avg | 0.817 | 0.093 | 0.908 | 0.783 | 0.626 | 0.301 |\\n\\nAs we can see from the above table, although not designed for non-targeted UAPs, Democratic Training still reduces the attack SR from over 90% to 30% on average. This is indeed not as effective as targeted UAP defense performance and we believe this is due to the different entropy spectrum caused by the two types of UAPs. 
We also analyzed the entropy spectrum of clean and UAP perturbed samples and no clear separation of the two is observed (we will add the plot to the revised version).\\nHence, although non-targeted UAPs do not cause severe entropy changes, enhancing a given model with low-entropy samples still improves the robustness against such perturbations to a certain level. \\n\\n3. Thanks for the comments, and the details of each network in Table 4 are shown below:\\n\\n| Setting | Model | SR | Aacc | Delta CACC |\\n| ------------ | ----- | ----- | ----- | ---------- |\\n| Targeted | NN1 | 0.239 | 0.441 | \\\\-0.010 |\\n| | NN2 | 0.397 | 0.099 | \\\\-0.010 |\\n| | NN3 | 0.069 | 0.480 | \\\\-0.096 |\\n| | NN4 | 0.088 | 0.621 | \\\\-0.067 |\\n| | NN5 | 0.018 | 0.622 | \\\\-0.079 |\\n| | NN6 | 0.188 | 0.521 | \\\\-0.359 |\\n| | avg | 0.167 | 0.464 | \\\\-0.104 |\\n| Non-targeted | NN1 | 0.306 | 0.297 | \\\\-0.063 |\\n| | NN2 | 0.655 | 0.271 | \\\\-0.050 |\\n| | NN3 | 0.421 | 0.238 | \\\\-0.160 |\\n| | NN4 | 0.499 | 0.305 | \\\\-0.010 |\\n| | NN5 | 0.428 | 0.216 | \\\\-0.289 |\\n| | NN6 | 0.422 | 0.445 | \\\\-0.435 |\\n| | avg | 0.455 | 0.295 | \\\\-0.168 |\\n| Known UAP | NN1 | 0.000 | 0.554 | \\\\-0.001 |\\n| | NN2 | 0.422 | 0.101 | 0.004 |\\n| | NN3 | 0.128 | 0.414 | \\\\-0.014 |\\n| | NN4 | 0.163 | 0.649 | 0.004 |\\n| | NN5 | 0.005 | 0.750 | 0.004 |\\n| | NN6 | 0.619 | 0.385 | 0.000 |\\n| | avg | 0.223 | 0.476 | 0.000 |\\n\\nFor comparison with existing works, we compare the performance of CFN and FNS against Democratic Training on NN1, NN2 and NN3. Instead of testing different combinations of clr and dr settings, we adopt the recommended values in the original paper. The table below shows the details for each model. 
For SFR, as the method is not fully open-sourced, we can only evaluate the performance of GoogleNet trained on ImageNet dataset where a pretrained defense model is provided (Table 5 shows the result on GoogleNet).\\n\\n| | before || after (CFN) ||| after (FNS) |||\\n| ----- | ------ | ------ | ----- | ----- | --------- | ----- | ----- | --------- |\\n| model | Aacc | SR | Aacc | SR | Cacc drop | Aacc | SR | Cacc drop |\\n| NN1 | 13.356 | 71.363 | 0.150 | 0.575 | 0.131 | 0.155 | 0.674 | 0.019 |\\n| NN2 | 6.625 | 69.781 | 0.075 | 0.661 | 0.031 | 0.074 | 0.677 | 0.002 |\\n| NN3 | 19.475 | 58.375 | 0.224 | 0.443 | 0.056 | 0.219 | 0.520 | 0.017 |\"}", "{\"metareview\": \"The paper presents a novel defense method called Democratic Training to mitigate the impact of Universal Adversarial Perturbations (UAPs) on deep neural networks. The authors observed that UAPs lead to an abnormal entropy spectrum in hidden layers, which shows the model's prediction is dominated by a small subset of features. Democratic Training mitigates this issue by increasing entropy to ensure model predictions rely on a wider range of features. The approach was evaluated on multiple models and datasets and was found to be effective in reducing attack success rates while preserving accuracy on clean data. There are some merits in this paper. For example, the use of entropy to reveal the dominance of UAPs and the concept of Democratic Training as a defense mechanism is innovative. The method was evaluated across various neural network architectures and benchmark datasets, which strengthens the claim of its general applicability. Unlike other defense methods, Democratic Training does not require architectural modifications, which makes it easy to integrate into existing systems. Moreover, this paper is well-written and easy-to-follow. 
The paper makes a commendable observation concerning the entropy spectrum in deep neural network layers, which is a significant contribution to the field and forms the basis for the proposed defense mechanism. The efficiency of the proposed democratic training method is noteworthy. It circumvents the need to generate UAPs during training, instead utilizing a limited number of epochs to identify low-entropy examples, which is a resourceful approach. While the reviewers had some concerns about reproducibility, the authors did a particularly good job in their rebuttal. Therefore, all of us have agreed to accept this paper for publication! Please include the additional discussion in the next version.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers raised their scores after the rebuttal.\"}", "{\"summary\": \"To improve neural network robustness to targeted UAPs, this paper proposed an adversarial training-like method that fine-tunes a pretrained model to reduce middle-layer entropy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experiments are comprehensive.\\n2. The proposed defense is attack-agnostic which is more practical and efficient.\\n3. The proposed defense largely reduced the targeted attack success rate.\\n\\nI tend to accept this paper. However, since I'm not familiar with UAP attack and defense baseline methods, I will listen to other reviewers and public comments and then decide.\", \"weaknesses\": \"1. UAP attacks evaluated in the paper were published in 2018, 2019, 2020 and seem out-of-date.\\n2. After democratic training, there is still a gap between ``AAcc.'' and clean accuracy. I wonder about the effectiveness of democratic training against non-targeted UAPs.\\n3. Average results in Table 4\\\\&5 are ambiguous since there can be a large bias among different networks.\", \"questions\": \"1. 
For RQ3: did you adversarially train a model from scratch or just fine-tune a pretrained model with an adversarial training objective?\\n2. I'm not very convinced by the claim in Lines 239-243 that says middle-layer feature entropy suggests the model's classification confidence.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 1Bzp,\\n\\nThanks for your kind feedback and for taking the time to review our responses. We truly appreciate your thoughtful engagement and are glad that our clarifications have addressed your concerns.\"}", "{\"title\": \"Response to Reviewer etKh II\", \"comment\": \"Response to questions:\\n1. Thanks for the comments. Indeed, our initial observation was based on DF-UAP (as described in Section 3.2). However, after we evaluated our approach against multiple UAP attack methods besides DF-UAP, i.e., sPGD, LaVAN and GAP, it becomes apparent that our method indeed works for all the UAP attacks that we tested. In fact, we further evaluated one recent UAP attack SGA [4] and the result of SGA is shown below:\\n\\n| | before || after ||\\n| ------- | ----- | ----- | ----- | ----- |\\n| model | Aacc | SR | Aacc | SR |\\n| NN1 | 0.133 | 0.722 | 0.592 | 0.005 |\\n| NN2 | 0.067 | 0.806 | 0.415 | 0.096 |\\n| NN3 | 0.147 | 0.641 | 0.510 | 0.011 |\\n| NN4 | 0.034 | 0.999 | 0.904 | 0.009 |\\n| NN5 | 0.107 | 0.798 | 0.743 | 0.020 |\\n| NN6 | 0.227 | 0.776 | 0.812 | 0.034 |\\n| average | 0.119 | 0.790 | 0.663 | 0.029 |\\n\\nMoreover, we analyse the layer-wise entropy on clean samples and samples with UAP on original model and defended model. Comparing the entropy spectrum of clean samples and UAP perturbed samples on the original model, sPGD, LaVAN, GAP and SGA will cause layer-wise entropy to drop. 
Meanwhile, the entropy spectrum of UAP perturbed samples on the Democratic Training enhanced model looks similar to that of clean samples, which suggests that our defense is able to mitigate the effect of different types of UAPs on the entropy spectrum. We will include the entropy plots in the revised version.\\n\\n2. Thanks for the comments, $H(i)$ represents the layer-wise entropy described in equation (3) for sample $i$. We will improve our presentation in the revised version. Different from generating a UAP, SG generates low entropy samples for model finetuning. Since we focus on targeted UAP attacks, methods relying on generated UAPs often require generating UAPs of a set of target classes when the attack target is unknown. However, SampleGenerator does not need target class information. Furthermore, we compare the effectiveness of adversarial examples and low entropy samples generated by SampleGenerator when finetuning a given model in RQ3. The results show that, with the same finetuning process setting, adversarial examples are less effective compared with low entropy samples in mitigating the effect of UAPs. Hence, based on the above results, low entropy samples generated by our SampleGenerator are more powerful compared to adversarial examples and UAPs, and do not require any information on the attack target class. \\n\\nReferences\\n\\n[1] Costa, J. C., Roxo, T., Proen\\u00e7a, H., & In\\u00e1cio, P. R. (2024). How deep learning sees the world: A survey on adversarial attacks & defenses. IEEE Access.\\n\\n[4] Liu, Xuannan, et al. \\\"Enhancing generalization of universal adversarial perturbation through gradient aggregation.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\"}", "{\"title\": \"Response to reviewer hAa5\", \"comment\": \"Thanks for your encouraging feedback and for increasing your confidence in the positive rating. 
We are pleased that the additional results have reinforced your evaluation, and we greatly appreciate your thoughtful review and support of our work.\"}", "{\"summary\": \"The author aims to improve robustness against universal adversarial perturbations by fine-tuning with a small amount of data, mainly by performing entropy based data augmentation to suppress the influence of general adversarial perturbations in a given model. Then the experiment was presented.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1, The setting is reasonable, author want to resist universal adversarial samples through small cost, this may be useful in some situation.\\n2, Good experiment results.\", \"weaknesses\": \"1, The method provided in the article is not novel, overall, it is still based on the 'min-max' methods. And in my opinion, it seems to be a weakened version of adversarial training.\\n\\n2, The symbols are somewhat confused. For example, in equation (4), author use $L_{cce}$, I think this means cross entropy loss, but author did not introduce what is $L_{cce}$ before equation (4); in algorithm line 3, author define $I_b^{en}$ which is got by SampleGenerator, but do not use it in the following of algorithm, or maybe it has been written as $i_{en}$ in line 4?\\n\\n3, I try to test the author's algorithm on CIFAR10(VGG16, budget 8/255), but I didn't get such a good result shown in table2, the SR has only decreased by about 10%. Author did not submit the code, so I hope the author can provide a detailed introduction to the settings for Algorithms 1 and 2 (For example, how to select the epoch, learning rate, hyperparameters), and it's best to provide a detailed explanation of each symbol and steps in the algorithm.\", \"questions\": \"1: The author obtained the idea of Democratic Training by studying the performance of UAP between different layers in network. 
I want to know why author said 'Democratic Training does not rely on generating UAPs and are thus not limited to specific UAP attacks.', As mentioned in the article, the UAP analyzed by the author is produced based on FA-UAP, so the properties analyzed should mainly for such kind of UAP, and the algorithm guided by this should also mainly target on FA-UAP. If it cannot be argued that all UAPs have such properties, then this statement seems unreasonable?\", \"2\": \"The author did not provide a detailed explanation for $H(i)$ in algorithm 2 line 2, according to the equation (3), I think that the author is trying to say $H(i)$ is an entropy loss. Less strictly speaking, maximizing H(x) means that the various components of x are directly averaged as much as possible. So, is the goal of SG(SampleGenerator, Algorithm 2) equivalent to finding an adversarial sample with each component average? Is this a form of weakened PGD? If so, why is finding weaker adversarial samples beneficial for improving robustness? Why not directly target finding UAP for SG?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"no\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an investigation into the defense against universal adversarial perturbations (UAPs), with a particular focus on targeted UAPs. The authors have made a notable observation regarding the entropy spectrum in hidden layers when UAPs are introduced, which serves as the cornerstone for their proposed 'democratic training' approach. This method aims to enhance the adversarial robustness of neural network models. The empirical results provided in the paper demonstrate the efficacy of the approach, as well as its ability to maintain the clean accuracy of the model.\\n\\nIn general, this paper is well-structured and presents a novel approach, which meets the basic criteria of ICLR conference. 
However, there are some aspects that could be improved or expanded upon to enhance the overall quality and impact of the paper.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-written and easy-to-follow.\\n2. The paper makes a commendable observation concerning the entropy spectrum in deep neural network layers, which is a significant contribution to the field and forms the basis for the proposed defense mechanism.\\n3. The efficiency of the proposed democratic training method is noteworthy. It circumvents the need to generate UAPs during training, instead utilizing a limited number of epochs to identify low-entropy examples, which is a resourceful approach.\", \"weaknesses\": \"1. The threat model employed in the experiments primarily utilizes gradient-based attack methods. These methods presuppose access to the model's parameters, aligning with white-box attack scenarios. This appears to be at odds with the assertion in Section 2.3 that adversarial knowledge does not extend to the internal parameters of the model. Clarification on this point would be beneficial.\\n\\n2. The comparison with adversarial training methods may require further refinement. Adversarial training aims to bolster adversarial accuracy by integrating adversarial examples with clean examples during training. Constraining the number of training epochs could result in an underfit model, which may not provide a fair benchmark. Additionally, it would be advantageous to include a comparison with the widely recognized TRADES method[1], which is absent from the current manuscript.\\n\\n3. The potential for adaptive attacks warrants consideration. If adversaries are aware of the defense strategy, they could tailor adversarial examples to bypass the defense. I know that in the threat model, no adaptive attacks are considered since the attackers do not know the internal details of the models. 
However, the chosen attack methods in the experiments inherently rely on gradient information. So I would suggest that the authors should consider the potential for adaptive attacks.\\n\\n4. The scope of the experiments is largely limited to datasets comprising natural images. It would be beneficial to extend the evaluation to smaller-scale datasets, such as CIFAR-10, to complement the findings and potentially leverage open-source robust models for further exploration of the neuron entropy spectrum concept.\\n\\n5. While the paper discusses various existing defensive methods against UAPs and includes experimental comparisons, a direct comparison with state-of-the-art methods is missing. It is recommended to condense the background section and incorporate a more thorough comparison with leading-edge techniques.\\n\\n6. Minor Issues\\n (1) Please consider reducing the margin between Figure 1 and the text.\\n (2) Suggest adding necessary notations (SR) from the main text to Table 2 for better understanding.\\n\\n\\n[1] Zhang, Hongyang, et al. \\\"Theoretically principled trade-off between robustness and accuracy.\\\" International conference on machine learning. PMLR, 2019.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your additional results. I will maintain my score and increase my confidence in this positive rating.\"}", "{\"comment\": \"Dear Reviewer xSaF,\\n\\nThanks for the recommendation, and the machine we used to run all reported experiments is with a 96-Core 1.4GHz CPU and 60GB system memory with an NVIDIA 24GB RTX 4090 GPU. 
We mentioned this in the original paper Section 4, but understand that it would have been much clearer if we had provided this information in the above comments.\"}", "{\"comment\": \"Most of the conclusions in the author's paper and response come from experimental results, and I am willing to believe in the author's experimental results, so my problem has been solved and I will improve my score. But considering that I haven't seen the author's code and detailed experimental setup, my repeated experiments on other datasets are not satisfactory, so I have to lower my confidence.\"}" ] }
4LiegvCeQD
IEL: Intra-Model Ensemble Learning For Single Sample Test-Time Adaptation
[ "Aidan Remington", "Yash Gondkar", "Wei Ding", "Ping Chen" ]
Test-Time Adaptation (TTA) problems involve adapting pre-trained models to new data distributions in testing time, with access to only model weights and a stream of unlabeled data. In this work, we present IEL, a method for adapting sets of independently pre-trained classifiers to distribution shifted data one sample at a time without labels. We minimize the cross-entropy between the classifier output that has the highest predicted probability for the majority voted class (a high confidence softmax) and all other models in a set of classifiers. The majority voted model that all others learn from may change from sample to sample, allowing the group to collectively learn from each other. Our method uniquely optimizes all trainable parameters in each model and needs only a single sample for adaptation. Using sets of independently pre-trained base classifiers with distinct architectures, we show that our approach can reduce generalization error for image classification tasks on corrupted CIFAR-10, CIFAR-100, and ImageNet while also minimizing the entropy of model outputs.
[ "Test-Time Adaptation", "Ensemble Learning", "Entropy-Regularization", "Knowledge Distillation" ]
Reject
https://openreview.net/pdf?id=4LiegvCeQD
https://openreview.net/forum?id=4LiegvCeQD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yW13kpYk61", "iUJvUGB5tZ", "fKikuSAzWS", "ailA5bBAnD", "WFLjRywPqv", "UCP9sAY14c", "Qc7ZJI7b83", "QTIq9MTU6W", "PN9FOJbr5L", "GPqFNlLj1C", "CD3ImcQbpE", "6HWVN0sg2N", "3aW3NFJx0K", "1OMnBIZsBU", "1IhOjHVXNI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732670640964, 1732241213509, 1732685244781, 1730292999677, 1737523676529, 1732242873341, 1730326588602, 1732240722810, 1732650433986, 1732239850570, 1730735138772, 1730485911919, 1734747750002, 1732240610069, 1732655856145 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_LBnv" ], [ "ICLR.cc/2025/Conference/Submission5005/Authors" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_4Chx" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_4Chx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5005/Authors" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_SPHA" ], [ "ICLR.cc/2025/Conference/Submission5005/Authors" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_SPHA" ], [ "ICLR.cc/2025/Conference/Submission5005/Authors" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_LBnv" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_ZZ56" ], [ "ICLR.cc/2025/Conference/Submission5005/Area_Chair_aMLN" ], [ "ICLR.cc/2025/Conference/Submission5005/Authors" ], [ "ICLR.cc/2025/Conference/Submission5005/Reviewer_ZZ56" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response.\\n\\nThe reviewer acknowledges that the proposed method has its unique feature of intra-model ensemble learning, which is simple yet straightforward. 
It would be a great approach to tackle the TTA task if its algorithmic details and empirical studies are more thoroughly elaborated. \\n\\nAdditionally, the reviewer suggests that the authors consider using a small-sized pre-trained model to conduct experiments, especially if computing resources are a concern. Since the paper has mainly focused on the single sample adaptation setup (batch size of 1), it would be possible to experiment on a large-scale dataset (e.g., ImageNet) with pre-trained models of moderate size.\\n\\nOnce again, the reviewer appreciates the authors' responses.\"}", "{\"comment\": \"Thank you for taking the time to read our paper and give your review. We are currently working on some of the experiments you've recommended and will update this comment as we get new results.\\n\\n1. We are not aware of any evidence to support that minimizing the diversity of the ensemble is beneficial for TTA, but this is also why we found our results interesting. The de facto standard in many ensembling techniques is that diversity is beneficial for the ensemble error, as the classification weaknesses of some members can be compensated for by stronger members; however, our work seems to run counter to that standard, since our ensemble can improve while minimizing its diversity.\\n\\n2. We are currently studying the general relationship between IEL and TTA performance, particularly in settings where non-stationary distribution shifts are encountered. We will update this comment in the future as we get results.\\n\\n3. We do not have the experimental results that you mentioned in your question. \\n\\n4. Conventional TTA approaches typically operate on only a single model at a time, and so any adaptations made are inherently dependent on the pre-trained knowledge of a single model. With IEL we are able to depend on the pre-trained knowledge of several models. 
For example, if the member model in the ensemble with the highest classification accuracy makes an incorrect prediction on a given sample, then other members have the opportunity to correct it, unlike in TTA methods that depend on only a single model.\"}", "{\"comment\": \"I appreciate the response provided by the authors. I have checked their responses and look forward to updates reflecting the new results. Below are my thoughts regarding their responses, which I hope will be considered during the revision of the paper.\\n\\nI agree that reducing the diversity of the ensemble may surprisingly be beneficial in the TTA setting, and this is a noteworthy point. This is particularly intriguing because it challenges the conventional perspective presented in prior studies and offers a new viewpoint. However, to convincingly demonstrate a new perspective that contradicts the established understanding, it is essential to support the argument with more robust theoretical and experimental evidence. Unfortunately, the current manuscript does not provide sufficient evidence for this claim. Additionally, I am curious to know why reducing ensemble diversity is particularly helpful in the TTA setting. Is this claim specific to the TTA context, or could it have more general applicability? The answer to this question would likely influence the direction of future research.\"}", "{\"summary\": \"This work introduces a new method for Test-Time Adaptation (TTA), Intra-model Ensemble Learning (IEL), that optimizes multiple models for ensembled inference. In the IEL framework, the output of the model with the highest probability for the class that received the most votes is set as the pseudo-label, and all other models are optimized to output this pseudo-label by minimizing the cross-entropy. This process minimizes the diversity within the ensemble and aligns the outputs of the models to ensure mutual agreement. 
Experimental results show the effectiveness of the proposed framework.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well written and easy to follow along with the authors' reasoning.\\n2. The idea of utilizing ensemble methods for TTA is simple, intuitive and easy to adapt.\\n3. Experiments were conducted on a single-sample setting, which is one of the challenging tasks in TTA.\", \"weaknesses\": \"1. There is insufficient justification for minimizing the ensemble diversity. Personally, I believe that as diversity increases, performance can be improved in general, including in TTA, since the information also increases. For example, in [1], which is also referenced in this manuscript, it was shown that increasing diversity can lead to a lower loss. Additionally, the counterexample of an ensemble of models with 100% performance (Lines 161\u2013166) is unrealistic and therefore inappropriate for supporting the claim. If there is a large distribution shift and the source-trained models perform poorly on the target dataset, reducing diversity may actually have an adverse effect. In conclusion, it remains unclear how reducing ensemble diversity benefits TTA.\\n\\n2. Due to multiple assumptions, the scope of application of this research is limited. The authors assume there are multiple source-trained models (Lines 230\u2013232), but it is questionable whether this assumption is easily met in practice. Furthermore, the assumption of stationary distribution shifts (Lines 265\u2013266) raises concerns about whether IEL would be effective in other TTA scenarios such as online imbalanced label distribution shifts or mixed distribution shifts [2].\\n\\n3. The experiments conducted do not sufficiently demonstrate the effectiveness of IEL. 
For example, the authors should include 1) performance comparisons with various previous works on TTA, 2) an ablation study on the number of ensemble models, and 3) comparisons of computational costs associated with using multiple models. As mentioned earlier, including experiments across diverse TTA scenarios would also provide a more comprehensive understanding of IEL\u2019s effectiveness.\\n\\n[1] A unified theory of diversity in ensemble learning\\n[2] Towards stable test-time adaptation in dynamic wild world\", \"questions\": \"1. Is there any theoretical or empirical evidence that minimizing the diversity of the ensemble is beneficial for TTA?\\n\\n2. Figure 2 shows that predictions with lower IEL loss have lower entropy. However, a previous study [3], also cited in the paper (Lines 101\u2013105, 173\u2013176), claimed that reducing the entropy of the output during training can be problematic for TTA. This raises doubts about whether lower IEL loss actually leads to higher TTA performance, and I am curious whether there is any evidence to verify the relationship between IEL and TTA performance. \\n\\n3. Are there any experimental results that address the weaknesses mentioned (e.g., comparisons with previous studies, an ablation study on the number of models, computational cost comparisons, and performance comparisons across various TTA scenarios)? \\n\\n4. 
If there are multiple source-trained models, what advantages does IEL offer over an approach that applies conventional TTA methods to each model individually and then performs an ensemble?\\n\\n[3] Entropy is not enough for test-time adaptation: From the perspective of disentangled factors\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for taking the time to read and review our work.\", \"our_responses_to_your_review_are_listed_below\": \"1. Although IEL is rather simple, we believe its simplicity is a strong feature. Our method does not beat existing ensemble or TTA algorithms, but we feel our novelty is in that we uniquely train a set of models, unlike in many standard TTA methods, by having the models learn from each other in a closed system. We find that, as a result, IEL minimizes the Shannon Entropy of member model outputs (Figure 1), which is a commonplace optimization signal in many TTA methods, while relying explicitly on an ensemble-based optimization signal. \\n \\n2. Due to computational restrictions at our public university, we were unable to carry out certain experiments. We will attempt to compare the NOTE and REALM methods to ours. In the single-sample setting, both NOTE and REALM inherently depend on batch statistics, whereas we do not. Going forward, we will leave room for comparisons to SOTA TTA methods like NOTE and REALM. We will update this comment as we get results.\\n\\n3. We will add a clear and detailed description of the tuning set samples used in our experiments. The correlation between the number of ensemble models and TTA performance was not more rigorously tested due to computational constraints.\\n\\n4. We agree that a larger subset of ImageNet-C is required to accurately assess the performance gains given by IEL. 
Our computational resources are limited in our academic university lab, unlike at larger companies, but this is a good suggestion that we will proceed with over time.\"}", "{\"summary\": \"The submission describes a method for adapting an ensemble of classifiers to a new data distribution using only unlabeled data. Specifically, the classifiers are trained sequentially with soft pseudo-labels, which are the output of the model that has the highest predicted probability for the majority class across classifiers. Experiments are performed on standard datasets (CIFAR10, CIFAR100, ImageNet) with synthetic distribution shifts for an ensemble consisting of 5 pre-trained image classification models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strengths:\", \"the studied problem is realistic and relevant\", \"the proposed method makes sense and is simple enough for practical use\", \"experiments show improved accuracy by the proposed test-time-adaptation\"], \"weaknesses\": [\"Unfortunately, the submission has a number of weaknesses.\", \"incomplete method\", \"The method works iteratively but the submission does not provide a termination condition. This is not a minor issue, even though the manuscript makes it sound like one: without a termination condition, the method cannot be run in practice, and because no labeled data is available at test time, standard techniques, such as checking for a drop of validation accuracy, cannot be applied, either.\", \"lack of clarity in contribution and scientific comparison to prior work\", \"The submission almost exclusively showcases the proposed method itself, but it does not put the method into context and competition with prior or alternative techniques. This leaves the novelty and relevance of the contribution unclear. 
Specifically, for a publication at a scientific venue, I expect that the novel aspect of the proposed method is clearly described, contrasting it to ideas and techniques that already existed in previous works. The manuscript does not do a good enough job doing so. As related work it concentrates almost exclusively on recent papers from the application domain of test-time adaptation with deep networks. However, ensemble methods have been studied extensively in machine learning, e.g. [Alpaydin, \\\"Introduction to Machine Learning\\\", 2014; Chapter 17], and self-training methods for learning from unlabeled data also have a long history, going back at least to [Fralick, \\\"Learning to Recognize Patterns without a Teacher\\\", 1965], and having emerged many times afterwards, e.g. in the context of semi-supervised learning (e.g. Chapelle \\\"Semi-Supervised Learning\\\", 2006) and also test-time adaptation, e.g. [Royer et al, \\\"Classifier adaptation at prediction time\\\", 2015]. Even for adapting ensemble methods, self-training was proposed before, e.g. [Ghosh et al, \\\"Self Training with Ensemble of Teacher Models\\\", 2021]. In light of extensive prior work, the manuscript should make clear what the technical novelty of the proposed method is.\", \"lack of baselines in experiments\", \"The experimental evaluation presents results for the proposed method, but it does not contrast them against prior works. Consequently, the reader only learns that using the proposed method often (but not always) works better than not doing test-time adaptation, but not whether the method is better than already existing methods for test-time-adaptation. It would also be important to see at least a comparison to obvious baselines, such as simply self-training each network individually. For the specific choices made, e.g. 
using the majority vote prediction as the target label but the softmax scores of the most confident classifier, an ablation study would be needed to show whether this is actually useful, or whether using hard labels or the ensemble average confidence would work equally well (or better).\", \"unsubstantiated claims\", \"The manuscript contains factual claims, e.g. about design choices, that are not self-evident but are also not substantiated with evidence. Some of these appear misleading and/or are not credible, see \\\"Questions\\\". The counterexample on page 4 seems to be a misunderstanding of prior work. Claims in the literature are not that diversity is *necessary* for a good ensemble. Indeed, as the submission states, an ensemble consisting of many copies of a perfect classifier is still perfect. But rather, diversity of prediction *mistakes* between models in an ensemble can provably benefit accuracy. Without any errors, that notion is not defined. But if mistakes do occur, the variance of predictions is decreased if errors are negatively correlated (e.g. (17.5) in [Alpaydin, 2014]).\", \"shortcomings in the experimental protocol\", \"Several aspects about the experimental evaluation are unclear or misleading (see questions below).\", \"The reported accuracy values in Tables 1-3 are \\\"highest accuracy improvements\\\" over all epochs.\", \"That means they are not unbiased estimates, but potentially overconfident.\", \"The description of ensemble elements is not clear from the text; it currently seems only provided in the Table headers.\", \"The specified regularization constant $\\\\alpha=10e^{-11}$ (which should probably be $\\\\alpha=10^{-11}$) is so small that no regularizing effect is mathematically possible even after hundreds of thousands of steps. I would expect $\\\\alpha=0$ to work equally well.\", \"The exact variant of the \\\"ImageNet\\\" dataset used is not clear. 
It would be best to provide a source for how the data was obtained and prepared.\", \"The results table lists differences in accuracy, but not absolute accuracy numbers, which would be useful to judge e.g. if the base classifiers are suitable for the task.\", \"It is not specified how the method's hyper-parameters were selected.\", \"Given the mixed positive and negative results, a test of significance should be performed to check whether the reported results are not equally well explained by random chance (e.g. Wilcoxon signed rank).\"], \"further_comments\": [\"I found the analogies with human learning or research (for example the top of page 3) rather superficial, and I would recommend removing those.\", \"The reference section should be corrected. Many papers that have actually been published are listed as arXiv preprints or without a publication venue, e.g. [Arazo et al, IJCNN 2020], [Bucila et al, KDD 2006], ...\"], \"questions\": [\"What are the key technical differences of the proposed method to simply applying gradient-based self-training to an ensemble classifier, one sample at a time?\", \"What are the key technical differences to prior work on ensemble self-training, such as [Ghosh, 2021]?\", \"Please clarify these claims: 1) page 3: \\\"we minimize the diversity of the ensemble [...] in a way that facilitates generalization of the source training domain to the new testing domain.\\\" What does \\\"facilitates generalization\\\" mean here? Just higher test accuracy? In what way is lower *diversity* of the ensemble an important factor for that? 2) In the conclusion: \\\"member models are optimized and improved like in knowledge distillation and create an upward spiral effect\\\". What do you mean by upward spiral effect? E.g. from Figure 3 it appears that model accuracy goes up and down over epochs.\", \"How were the method's hyperparameters chosen, given that standard model selection is not possible in the test-time adaptation setting?\", \"Which variant/subset of ImageNet was used? 
Is the \\\"test\\\" data the actual \\\"test\\\" set or the \\\"validation\\\" part?\", \"The manuscript emphasizes a batchsize of 1, but multiple *epochs* are run on the adaptation data. Does this mean that your method must buffer all test data, or at least see it repeatedly, such that statistics can be collected? Wouldn't batch-based TTA techniques also be applicable then?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for taking the time to read our work and give your review; we are grateful that you think our work has potential. In light of the glaringly absent experiments that you pointed out, we are deciding to revise the paper for a future submission. All of the clarity and technical issues with the writing that you mentioned are being worked on now (W1, W3, W4, W6). Our replies to the particular issues you raised are given below:\", \"W2: Before we got to the bulk of the experiments detailed in the paper, we tried using a naive baseline as you described: an ensemble where each individual model has TENT applied to it in parallel. In these experiments we found that we consistently underperformed compared to the naive baseline, but felt that the core of our novelty was in that 1) we make progress on TTA problems using non-standard methods (ensemble pseudo-labeling is popular in self-training, but less so in TTA, and the Shannon entropy signal makes no appearance in our loss function) and 2) we enable models to learn from each other when incoming data is unlabeled and has its distribution shifted from that of the original training data.\", \"W5: This is a good observation that we overlooked. 
Going forward, we will emphasize that by using multiple epochs we inherently allow storage of incoming data, and so any methods that utilize batch calculations should also be usable given our assumptions.\", \"W7: Going forward we plan to carry out more comprehensive experiments that relate IEL to other popular TTA methods. In particular, it seems that IEL can be used in tandem with many TTA methods that depend on optimization of only batch normalization layer parameters, like TENT and EATA. Could you say more about what you mean by \u201cif we consider the EATA loss function for ensembling\u201d?\", \"W8: Due to computational limitations at our public university, we were not able to re-run our experiments while varying the number of models in the ensemble. For similar reasons, we were also not able to investigate the long-run behavior of our approach. We are currently working on running experiments for both of these issues.\"]}", "{\"comment\": \"Dear authors,\\nthank you for your clarifications. It, and the other reviews, confirm my impression that the work is currently not suitable for publication at a top-tier venue such as ICLR. I maintain my recommendation of rejection.\"}", "{\"title\": \"Q1-Q4\", \"comment\": [\"Thank you for taking the time to read our work. The issues with our experiments that you pointed out are concerning, and addressing them requires more time than the discussion period permits, so we are revising the work for a future submission. Our replies to the particular issues and questions that you gave are listed below:\", \"You mentioned that you expected us to get the same results when we set our regularization constant to 0. We re-ran the CIFAR-100 experiment detailed in Section 4 with the regularization constant set to 0 and found that the resulting TTA ensemble learns nothing and makes the exact same predictions as a static ensemble, contrary to the expectation. 
We chose to use a small regularization constant simply because it was what gave the best results for us in practice. When increasing the learning rate slightly above the small one listed in the paper (say from 10^-11 to 10^-10) we found that the performance gains of our TTA ensembles were significantly diminished. We are currently considering why such a small learning rate is necessary, and a more extensive parameter search is also being worked on.\", \"Q1: Our proposed method is precisely as you describe, an application of gradient-based self-training to an ensemble classifier, one sample at a time. Although prior works on self-training typically assume that unlabeled data for adaptation and original training data are identically distributed, we extend the use-case of a rather conventional self-training technique (gradient-based self-training with pseudo-labels) to scenarios where distribution shifts are present (Test Time Adaptation use case) with moderate success. No SOTA TTA records are broken with our approach, but we experimentally show that simple self-training approaches have the potential to succeed in TTA problems, which we find particularly interesting given the recently proposed issues with the Shannon Entropy as an optimization signal for TTA problems (\u201cEntropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors\u201d Lee et al.). Although optimization of our proposed signal consequently minimizes the Shannon Entropy (Figure 1), we felt that our approach was still novel since the Shannon Entropy does not explicitly appear in our loss function (like it does in many other TTA works).\", \"Q2: There are two key technical differences between the method proposed in \u201cSelf Training with Ensemble of Teacher Models\u201d (Ghosh, 2021) and our work. 
First, the authors use one-hot encoded pseudo-labels as opposed to the soft-labels that we use, and second, the authors use an average softmax as the ensemble output, whereas we use a majority vote. Our use of soft-labels was motivated by the work \u201cDistilling the Knowledge in a Neural Network\u201d (Hinton, 2015), where it was found that soft-labels can provide significant information about more than a single class, unlike one-hot pseudo-labels. The use of centroid-based ensembles, like using the mean of member model outputs as the ensemble output, has been well-studied, in part because the arithmetic mean (and other means) has nice properties (differentiable and can be easily written mathematically) for use in theorems and proofs. With the unpopularity of majority voting and our use of soft-labels, we felt that our approach was different enough from conventional self-training methods to be novel. Admittedly, more papers were read on TTA than self-training during the writing of our paper.\", \"Q3: When we used the phrase \u201cfacilitates generalization\u201d we meant that our method improved the performance of classifiers on data from a different distribution than the source training data. The relationship between ensemble diversity and ensemble prediction accuracy is still not well-understood, with recent work only focusing on mean-based ensembles when training and testing data are identically distributed and when models are held constant (no back propagation) - \u201cA Unified Theory of Diversity in Ensemble Learning\u201d (Wood, 2024). In Wood\u2019s Bias-Variance-Diversity decomposition of the expected test error (Theorem 5, Page 10), minimizing the diversity should negatively impact the error if the bias and variance terms are held constant. 
However, since our TTA ensembles are neither held constant, nor use identically distributed train/test sets, nor use a mean-based combination scheme, it is unclear how simple changes in the ensemble diversity should affect the remaining terms and, consequently, the expected test error. Still, we felt that our positive results were interesting compared to Wood's error decomposition.\", \"Q4: Hyperparameters were chosen by a trial-and-error search along parameter intervals until the desired test set performance was achieved. In practice, this should be impossible to achieve since we do not allow the labels needed for evaluating test set performance. This is an oversight that is currently being revised.\"]}", "{\"summary\": \"The paper introduces an intra-model ensemble learning method for single sample test-time adaptation. It minimizes the cross-entropy losses between the output with the highest confidence score and all other classifiers' outputs. It optimizes all trainable parameters (except for BN layers) and requires only a single sample for TTA. It achieves improved performance on corrupted datasets including CIFAR10-C, CIFAR100-C and ImageNet-C.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a challenging TTA setting where only a single, unlabeled sample is given for adaptation during test time.\", \"The paper adopts an interesting approach of ensemble learning to dynamically optimize a group of learners (pre-trained models), showing improved TTA performance.\"], \"weaknesses\": [\"The proposed algorithm offers no substantial improvement over existing ensemble learning methods. It simply combines 1) selecting the most confident prediction and 2) cross-entropy minimization of ensemble models. Technical contributions to both ensemble learning and single-sample TTA remain limited.\", \"The paper lacks sufficient experimentation to demonstrate the proposed method\u2019s effectiveness. 
It only compares results across different backbone architectures, without considering other baseline methods suitable for single-sample TTA, such as NOTE [1] and REALM [2]. Additionally, it does not explore alternative TTA settings, such as continual TTA, where incoming domains continuously change.\", \"The experiment section (Sec. 4) requires more careful writing and clarification. For instance, it should include a clear definition and detailed description of the tuning set samples, as well as more comprehensive experiments, including ablation studies, to examine the correlation between the number of ensemble models and TTA performance.\", \"In Section 4.1, the authors state that no catastrophic forgetting was observed on the ImageNet-C dataset. However, this is unlikely to be accurate since only 7,000 samples per corruption type from ImageNet-C were used for evaluation. More rigorous experiments and substantiated claims are needed.\", \"As noted in the limitations, the proposed method requires significant computational resources to utilize multiple pre-trained networks. However, the paper does not provide any empirical analysis or comparison of computational cost or adaptation latency.\", \"[1] Gong et al., Note: Robust continual test-time adaptation against temporal correlation. NeurIPS'22. \\\\\", \"[2] Seto et al., REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation. WACV'24.\"], \"questions\": \"Please refer to the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There is no potential violation of the CoE.\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a test-time adaptation (TTA) technique based on model ensembling. A set of models is simultaneously trained, and for each sample, the model with highest confidence is used as a teacher for all student models. 
Updates happen via standard cross-entropy. The authors show improvements over a non-adapted baseline model across CIFAR10/100-C and ImageNet-C.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The technique is simple and elegant. It generalises the commonly used student/teacher setup, which typically uses a single model, and is straightforward to scale in practice (by increasing the size of the ensemble).\", \"weaknesses\": \"**Summary**\\n\\nThe presentation and investigation done in the paper are well below the bar for ICLR. There are no baselines, and the results are not well presented and contextualised. The introduction to the paper is lengthy, and should be made more crisp. The contribution statement is not accurate and needs to be adapted to what is actually shown in the paper. A lot of important controls and analysis experiments are missing to back up the claims. While I think that the general idea has potential, it needs a much better investigation to be considered for acceptance. A list of actionable points is given below. \\n\\nFor full transparency, while I would be happy to increase the score if my points are addressed, I doubt this is doable within the rebuttal period. Depending on the other reviews, I would already suggest that the authors consider a full revision of the paper and submission of their paper to the next conference. That being said, I think this could become a very nice paper if more work is spent on the experiments, and I would be happy to iterate with the authors during the discussion period.\\n\\n**Major**\\n\\n**W1** There is a very lengthy introduction. The method section only starts on page 5. 
Before, a fair amount of related work is cited (great), but without considering any of that later as methods for comparison.\\n\\n**W2** A naive baseline is omitted: What happens if a state-of-the-art adaptation technique like EATA, or even older/simpler techniques like TENT are used, but adapting $n$ models in parallel, and then ensembling the outputs?\\n\\n**W3** Section 3 needs a rewrite and improvements to clarity. For example, basic metrics like the cross-entropy loss are abbreviated with an extra symbol $\\\\delta$; it would be better to clearly define the loss formulation used instead.\\n\\n**W4** The contributions reference \\u201ccontinual learning\\u201d (l. 130, l. 134), but there is no experiment backing this up. The reference should be removed.\\n\\n**W5** Claim 3 in the contributions (ll. 135-136) states that TTA requires a full batch. This is misleading or even wrong. There are certainly techniques that measure improvements when only a single sample is available (= model is adapted on that sample, and performance is measured) before the domain changes (e.g. Batch norm adaptation, or MEMO). However, in the setting considered here, model updates still accumulate on the same domain. In that setting, any TTA technique, like TENT, can be made suitable for the discussed setting by only updating every N steps (where N is the batch size), collecting the samples in the meantime.\\n\\n**W6** Claim 4 in the contributions is not at all corroborated by results in the paper and should be dropped.\\n\\n**W7** The paper needs to add recent methods to compare to. The current tables provide a lot of irrelevant details, and should be compressed. Instead of listing all different domains, it would be better to run a sufficient number of baseline models on the considered datasets to contrast the results. 
When doing so, it could be interesting to investigate whether the ensembling mechanism proposed is *orthogonal* to other adaptation methods: E.g., if we consider the EATA loss function for ensembling, does this improve over EATA?\\n\\n**W8** Analysis should be added: What happens if the number of models in the ensemble varies? How robust is this technique to a common problem in cross-entropy based adaptation (model collapse) when training over long time intervals?\\n\\n**W9** In case the authors decide to keep the continual learning claim: How does the model perform in the continual adaptation settings used in CoTTA or EATA, and how does the model perform on long-term adaptation on CCC (Press et al., 2023)?\\n\\n**Minor**\\n\\n- Related work in l. 178: Batch norm adaptation (Schneider et al., 2020) re-estimates batch norm statistics, and this was shown to also work on single samples. Test time training (Sun et al., 2019) also considers single samples. TENT, in contrast, *additionally* performs entropy minimisation akin to the cross-entropy loss discussed in the study.\\n- Figure 1 misses a legend. The colour code could be adapted; e.g., corruptions of the same type could get the same colour with different marker styles. That would make the plot more interpretable than the current 15 random colours.\", \"questions\": \"Please see the weaknesses above that are directly actionable. No further Qs.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Test-time adaptation aims to update a model online given test data to maintain or improve accuracy and other task metrics. This work adapts not a single model, as is usually the case, but a set of models that is applied as an ensemble. For prediction the majority vote is taken, then for adaptation all the models are updated by cross-entropy with the highest probability output for the majority class as the target. 
While it is common for test-time adaptation methods to require batches or at least sequences of test inputs, this method can reduce generalization error given a single point at a time (which has been addressed by prior work, such as MEMO and SAR, but is still distinctive). The experiments evaluate on the standard benchmarks for this topic: ImageNet and CIFAR-10/100 with corruptions (ImageNet-C, CIFAR-100-C, CIFAR-10-C). The results show consistent improvement over static ensembles without test-time updates.

Strengths:

- Test-time adaptation is a popular topic where progress is being made, and this work focuses on (1) a particularly challenging and relevant setting of single-input adaptation (LBnv, SPHA, 4Chx) and (2) ensembling, which is understudied for this purpose (LBnv, 4Chx).
- The method is simple and scalable, since the size of the ensemble can be varied (ZZ56), as can the size of the models in the ensemble (LBnv).

Weaknesses:

- The experiments compare to ensembling but lack baselines for test-time adaptation, which are the natural comparisons (LBnv, ZZ56, SPHA, 4Chx). Without evaluating existing adaptation methods, and ideally measuring multiple settings like episodic and continual test-time adaptation (LBnv, 4Chx), it is difficult or impossible to gauge the improvement due to the proposed method.
- Test-time adaptation of an ensemble requires the computational overhead of inferring and updating the set of models in the ensemble. However, this overhead is neither measured, discussed, nor mitigated (LBnv).
- The claims and the evidence are disconnected, either in the results of this work or in its exclusion of prior works (ZZ56, SPHA).
This includes the existence of single-sample adaptation methods (like BN adaptation and MEMO), the study of continual settings by CoTTA/EATA/CCC, and the off-topic and unsubstantiated claim of relevance to human collaboration (stated in the 4th bullet point of the introduction).

Rationale for decision: Four expert reviewers agree on rejection (LBnv: 1, ZZ56: 3, SPHA: 3, 4Chx: 3). The meta-reviewer agrees with the raised weaknesses and would like to highlight the missing baselines, such as parallel adaptation of multiple models, and the missing comparisons to existing work. Given the important omissions in the experiments and related work, the weaknesses outweigh the strengths, and the meta-reviewer sides with rejection.

Additional comments on reviewer discussion: The authors respond to each review, and the reviewers reply in turn to engage in discussion and confirm their evaluations. The authors do not further reply or revise the submission, but they do comment that "we are deciding to revise the paper for a future submission", although the submission is not withdrawn.

**Q5-Q6**

- Q5: For training the base models on uncorrupted data, we used the data provided in ImageNet as "train". Corrupted data that was adapted to for IEL was taken from the "train" set provided in ImageNet-C. To evaluate the effectiveness of our adaptation to this corrupted "train" data, we used data from the "test" set of ImageNet-C. This "test" set was split into two sets, one called validation_set and one called evaluation_set. Validation_set was used for hyperparameter tuning, while evaluation_set was used strictly for estimating test-set errors.
Each time we ran the experiment, we randomly reselected the validation_set and evaluation_set samples from the larger "test" set provided in ImageNet-C.
- Q6: You are correct: we have multiple epochs, so each unlabeled data point is seen multiple times. Seeing the data points multiple times requires storage, which should make batch-based training possible. We plan to investigate how our approach adapts ensembles when data points are discarded after use.

Comment: Dear authors, please let me know if you are still revising the paper before the revision deadline -- I will stay put and remain happy to re-evaluate.

**W2** It sounds like this should still be reported and analysed then.

**W5** Thanks for acknowledging. It would be good to keep this in mind when deciding on a good set of baselines.

**W7** With my comment, I meant that it might be possible to replace your particular formulation of the loss with any self-learning loss, as the ensembling method is orthogonal to the self-training loss as far as I can judge. For ETA/EATA in particular, it might be interesting to convert the loss function and make it suitable for application in your framework. I see this more as an optional step, though, which may become necessary if your formulation (without the bells and whistles of the EATA framework) does not outperform EATA -- so this point is optional.

**W8** I think this would add a lot to the paper, thanks for considering. Note that performance on ImageNet-C can be deceptive, and some models that appear to work on ImageNet-C then collapse when running on, e.g., CCC (Press et al., [NeurIPS 2023](https://proceedings.neurips.cc/paper_files/paper/2023/hash/7d640f377893fc5f22b5610e175ef7c3-Abstract-Conference.html)).
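The every-N-steps workaround raised under W5 (and relevant again when assembling the baseline set discussed above) can be sketched as a thin wrapper around any batch-based adaptation step. This is a generic illustration; `adapt_fn` is a placeholder for, e.g., a TENT-style entropy-minimisation update, not an API of any particular library:

```python
class BufferedAdapter:
    """Feed a batch-based TTA update one test sample at a time."""

    def __init__(self, adapt_fn, batch_size):
        self.adapt_fn = adapt_fn      # any batch update, e.g. an entropy-minimisation step
        self.batch_size = batch_size  # N: update frequency in samples
        self.buffer = []

    def observe(self, sample):
        """Collect a single sample; run one model update every `batch_size` samples."""
        self.buffer.append(sample)
        if len(self.buffer) == self.batch_size:
            self.adapt_fn(self.buffer)
            self.buffer = []

# Toy usage: record which batches the update step would see.
calls = []
adapter = BufferedAdapter(lambda batch: calls.append(list(batch)), batch_size=2)
for x in range(5):
    adapter.observe(x)
# calls == [[0, 1], [2, 3]]; sample 4 is still waiting in the buffer
```

With such a wrapper, a single-sample stream reduces to the usual batched setting, at the cost of delaying each update by up to N samples.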